
SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards

About

Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo.

Siddharth Reddy, Anca D. Dragan, Sergey Levine • 2019
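The abstract describes SQIL as a handful of minor modifications to standard Q-learning: relabel demonstration transitions with a constant reward of +1, relabel the agent's own transitions with a reward of 0, and train on batches drawn from both buffers. The sketch below illustrates that relabeling and balanced-sampling step in isolation; the `Transition` container and function names are illustrative, not from the paper's codebase, and the full algorithm would plug these batches into any off-policy Q-learning or actor-critic update.

```python
import random
from collections import namedtuple

# Hypothetical transition container for illustration.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

def relabel(transition, is_demo):
    """SQIL reward relabeling: r = +1 for demonstrated (state, action)
    pairs, r = 0 for all of the agent's own experience."""
    return transition._replace(reward=1.0 if is_demo else 0.0)

def sample_sqil_batch(demo_buffer, agent_buffer, batch_size, rng=random):
    """Draw a balanced batch of relabeled demo and agent transitions,
    which would then feed a standard soft Q-learning update."""
    half = batch_size // 2
    demo = [relabel(t, True) for t in rng.sample(demo_buffer, half)]
    agent = [relabel(t, False) for t in rng.sample(agent_buffer, batch_size - half)]
    return demo + agent
```

Because the rewards are constants, no reward network (and no adversarial training, as in GAIL) is needed: the sparsity of the +1 signal is what pushes the agent back toward demonstrated states.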

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-expert v2 | Normalized Score: 19.9 | 56 |
| Offline Reinforcement Learning | D4RL hopper-expert v2 | Normalized Score: 25.5 | 56 |
| Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score: 8.8 | 56 |
| Offline Imitation Learning | D4RL Ant v2 (expert) | Normalized Score: 44.3 | 20 |
| Imitation Learning | HalfCheetah one-shot v2 | Normalized Score: 1.1 | 11 |
| Imitation Learning | Walker2d one-shot v2 | Normalized Score: 4.6 | 11 |
| Imitation Learning | Hopper one-shot v2 | Normalized Score: 16.8 | 11 |
| Imitation Learning | Ant one-shot v2 | Normalized Score: 12.5 | 11 |
| Cross-domain Offline Imitation Learning from Demonstrations (C-off-LfD) | D4RL MuJoCo reward-free v2 (medium, medium-replay, medium-expert) | Hopper-v2 Return (medium): 34.4 | 7 |
| Single-domain Offline Imitation Learning from Demonstrations (S-off-LfD) | D4RL MuJoCo reward-free v2 (medium, medium-replay, medium-expert) | Hopper-v2 (m) Score: 32.6 | 7 |

Showing 10 of 13 rows.
