Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
About
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose adversarial inverse reinforcement learning (AIRL), a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.
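For readers who want a concrete picture of the adversarial reward learning formulation, below is a minimal PyTorch sketch of the disentangled discriminator the AIRL paper describes: D(s, a, s') = exp(f(s, a, s')) / (exp(f(s, a, s')) + π(a|s)), with f(s, a, s') = g(s, a) + γh(s') − h(s), where g approximates the reward and h is a state-only shaping potential. The class and parameter names (`AIRLDiscriminator`, `obs_dim`, `act_dim`, hidden sizes) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class AIRLDiscriminator(nn.Module):
    """Sketch of the AIRL discriminator:
    D(s, a, s') = exp(f) / (exp(f) + pi(a|s)),
    f(s, a, s') = g(s, a) + gamma * h(s') - h(s).
    """
    def __init__(self, obs_dim, act_dim, hidden=64, gamma=0.99):
        super().__init__()
        self.gamma = gamma
        # g approximates the reward; h is a shaping potential over states.
        self.g = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))
        self.h = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def f(self, s, a, s_next):
        return (self.g(torch.cat([s, a], dim=-1))
                + self.gamma * self.h(s_next) - self.h(s))

    def forward(self, s, a, s_next, log_pi):
        # Classification logit: sigmoid(f - log pi) equals D exactly,
        # so BCE-with-logits on this value is the discriminator objective.
        return self.f(s, a, s_next) - log_pi

    def reward(self, s, a, s_next, log_pi):
        # The policy is trained on log D - log(1 - D) = f - log pi,
        # i.e. the learned (entropy-regularized) reward signal.
        with torch.no_grad():
            return (self.f(s, a, s_next) - log_pi).squeeze(-1)

if __name__ == "__main__":
    # Toy usage: expert transitions get label 1, policy samples label 0.
    disc = AIRLDiscriminator(obs_dim=4, act_dim=2)
    s, a, s2 = torch.randn(8, 4), torch.randn(8, 2), torch.randn(8, 4)
    log_pi = torch.randn(8, 1)   # log pi(a|s) from the current policy
    labels = torch.ones(8, 1)    # pretend this batch is expert data
    loss = nn.BCEWithLogitsLoss()(disc(s, a, s2, log_pi), labels)
    print(loss.item(), disc.reward(s, a, s2, log_pi).shape)
```

Because h enters f only as a potential-based shaping term, the g network is pushed toward a reward that depends on state (and action) alone, which is what makes the recovered reward transferable to changed dynamics, the property the experiments above test.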
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Imitation Learning | Half Cheetah | Mean Score | 1.84e+3 | 6 |
| Imitation Learning | Pendulum | Mean Score | -204.7 | 6 |
| Transfer Learning | Ant Disabled Morphology (test) | Mean Score | 130.3 | 6 |
| Imitation Learning | Ant | Mean Score | 1.24e+3 | 6 |
| Imitation Learning | Swimmer | Mean Score | 139.1 | 6 |
| Transfer Learning | Point Mass-Maze shifting (test) | Mean Score | -31.2 | 6 |
| Policy Generalization | Point Maze (test) | Average Return | -18.15 | 6 |
| Policy Generalization | Ant (test) | Average Return | 127.6 | 6 |
| Policy Generalization | Sweeper (test) | Average Return | -152.8 | 6 |
| Policy Generalization | Sawyer Pusher (test) | Average Return | -51.56 | 6 |