Self-Imitation Learning
About
This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. The algorithm is designed to verify the hypothesis that exploiting past good experiences can indirectly drive deep exploration. Empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard-exploration Atari games and is competitive with state-of-the-art count-based exploration methods. SIL also improves proximal policy optimization (PPO) on MuJoCo tasks.
Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee • 2018
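The SIL objective imitates the agent's own past decisions only when their observed return exceeds the current value estimate, via the clipped advantage (R − V)₊. Below is a minimal NumPy sketch of that loss on a batch of replayed transitions; the function name and batch layout are illustrative, not from the paper:

```python
import numpy as np

def sil_losses(log_probs, returns, values, beta=0.01):
    """Self-imitation losses on transitions sampled from the replay buffer.

    Only transitions whose past return R exceeds the current value
    estimate V(s) contribute: the policy loss is -log pi(a|s) * (R - V)_+
    and the value loss is 1/2 * (R - V)_+^2, combined with weight beta
    (the exact weighting is a hyperparameter; 0.01 here is an assumption).
    """
    adv = np.maximum(returns - values, 0.0)   # clipped advantage (R - V)_+
    policy_loss = -(log_probs * adv).mean()   # imitate only good past actions
    value_loss = 0.5 * (adv ** 2).mean()      # push V(s) up toward good returns
    return policy_loss + beta * value_loss, policy_loss, value_loss
```

Because the advantage is clipped at zero, transitions that turned out worse than expected are simply ignored rather than pushed away from, which is what distinguishes self-imitation from a standard off-policy actor-critic update.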
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Atari 2600 Montezuma's Revenge ALE (test) | Score | 2.50e+3 | 24 |
| Reinforcement Learning | Atari 2600 Private Eye ALE (test) | Score | 8.68e+3 | 19 |
| Reinforcement Learning | Atari 2600 Gravitar ALE (test) | Score | 2.72e+3 | 19 |
| Reinforcement Learning | Atari 2600 Freeway ALE (test) | Score | 34 | 14 |
| Reinforcement Learning | Atari 2600 Frostbite ALE (test) | Avg Reward | 6.44e+3 | 13 |
| Reinforcement Learning | Atari 2600 Venture ALE (test) | Score | 0.00e+0 | 9 |
| Reinforcement Learning | Atari 2600 Hero ALE (test) | | -- | 4 |
| Reinforcement Learning | Atari 49 games | Median Human-Normalized Score | 138.7 | 3 |