# IQ-Learn: Inverse soft-Q Learning for Imitation

## About
In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available that contains useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is widely used due to its simplicity of implementation and stable convergence, but it does not utilize any information about the environment's dynamics. Many existing methods that do exploit dynamics information are difficult to train in practice, either because of an adversarial optimization process over reward and policy approximators or because of biased, high-variance gradient estimators.

We introduce a method for dynamics-aware IL that avoids adversarial training by learning a single Q-function which implicitly represents both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating that our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q learning (IQ-Learn), obtains state-of-the-art results in offline and online imitation learning settings, significantly outperforming existing methods both in the number of required environment interactions and in scalability to high-dimensional spaces, often by more than 3x.
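To make the "single Q-function represents both reward and policy" idea concrete, here is a minimal tabular sketch (not the authors' implementation): under a soft (maximum-entropy) Bellman formulation, the policy follows from a softmax over Q-values, and a reward can be read back out of Q via the inverse soft Bellman operator `r(s,a) = Q(s,a) - γ·V(s')`, where `V` is the log-sum-exp soft value. The function names, the tabular setting, and the deterministic `next_states` table are illustrative assumptions.

```python
import numpy as np

def soft_value(Q, alpha=1.0):
    # Soft state value: V(s) = alpha * log sum_a exp(Q(s, a) / alpha)
    return alpha * np.log(np.sum(np.exp(Q / alpha), axis=1))

def implicit_policy(Q, alpha=1.0):
    # Soft-optimal policy implied by Q: pi(a|s) proportional to exp(Q(s, a) / alpha),
    # computed with a max-shift for numerical stability.
    z = np.exp((Q - Q.max(axis=1, keepdims=True)) / alpha)
    return z / z.sum(axis=1, keepdims=True)

def recovered_reward(Q, next_states, gamma=0.99, alpha=1.0):
    # Inverse soft Bellman operator: r(s, a) = Q(s, a) - gamma * V(s'),
    # assuming deterministic transitions given by next_states[s, a].
    V = soft_value(Q, alpha)
    return Q - gamma * V[next_states]

# Toy example: 2 states, 2 actions.
Q = np.array([[1.0, 2.0],
              [0.0, 0.0]])
next_states = np.array([[1, 1],
                        [0, 0]])  # hypothetical deterministic dynamics
pi = implicit_policy(Q)
r = recovered_reward(Q, next_states, gamma=0.9)
```

Both the policy and the reward are derived quantities here; only Q is a free parameter, which is what removes the adversarial reward-vs-policy optimization.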
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score | 46.6 | 56 |
| Offline Reinforcement Learning | D4RL halfcheetah-expert v2 | Normalized Score | 31.2 | 56 |
| Offline Reinforcement Learning | D4RL hopper-expert v2 | Normalized Score | 37.3 | 56 |
| Offline Imitation Learning | D4RL Ant v2 (expert) | Normalized Score | 85.9 | 20 |
| Continuous Control | MuJoCo Ant | Average Reward | 4.68e+3 | 12 |
| Continuous Control | MuJoCo HalfCheetah | Average Reward | 5.15e+3 | 12 |
| Imitation Learning | HalfCheetah one-shot v2 | Normalized Score | 1.2 | 11 |
| Imitation Learning | Hopper one-shot v2 | Normalized Score | 18.8 | 11 |
| Imitation Learning | Ant one-shot v2 | Normalized Score | 19.3 | 11 |
| Imitation Learning | Walker2d one-shot v2 | Normalized Score | 4 | 11 |