
Learning Robust Rewards with Adversarial Inverse Reinforcement Learning

About

Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose adversarial inverse reinforcement learning (AIRL), a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.
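The adversarial reward learning formulation mentioned in the abstract trains a discriminator of a particular restricted form: D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + π(a|s)), where f is the learned reward estimator and π is the current policy; the policy is then updated using log D − log(1 − D), which reduces to f(s, a) − log π(a|s). A minimal numerical sketch of these two quantities (function names and the scalar interface are illustrative, not from the paper's code):

```python
import numpy as np

def airl_discriminator(f_value, log_pi):
    """AIRL-style discriminator D = exp(f) / (exp(f) + pi(a|s)).

    f_value: output of the learned reward estimator f(s, a)
    log_pi:  log-probability of the action under the current policy
    """
    # Compute in log space for numerical stability:
    # log D = f - log(exp(f) + exp(log_pi))
    log_d = f_value - np.logaddexp(f_value, log_pi)
    return np.exp(log_d)

def airl_policy_reward(f_value, log_pi):
    """Reward used for the policy update: log D - log(1 - D),
    which simplifies algebraically to f(s, a) - log pi(a|s)."""
    return f_value - log_pi
```

When f(s, a) equals log π(a|s), the discriminator cannot distinguish expert from policy samples and outputs 0.5, which is the intended equilibrium of the adversarial game.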

Justin Fu, Katie Luo, Sergey Levine • 2017

Related benchmarks

Task                            Dataset    Metric         Result    Rank
Imitation Learning              Mujoco     Hopper Reward  1.16      15
Imitation Learning              Dataset-1  MAE            0.00e+0   13
Imitation Learning              Dataset 2  MAE            0.00e+0   13
Imitation Learning              Dataset 3  MAE            0.00e+0   13
Imitation Learning              Dataset 4  MAE            0.00e+0   13
Imitation Learning              Dataset 5  MAE            0.00e+0   13
Inverse Reinforcement Learning  Dataset-1  MSE            0.00e+0   13
Inverse Reinforcement Learning  Dataset 2  MSE            0.00e+0   13
Inverse Reinforcement Learning  Dataset 3  MSE            0.00e+0   13
Inverse Reinforcement Learning  Dataset 4  MSE            0.00e+0   13

(Showing 10 of 24 rows)
