
Self-Imitation Learning

About

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. The algorithm is designed to verify the hypothesis that exploiting past good experiences can indirectly drive deep exploration. Empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard-exploration Atari games and is competitive with state-of-the-art count-based exploration methods. The authors also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.
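The core idea can be sketched as a loss that imitates only those past transitions whose observed return exceeds the current value estimate. Below is a minimal, framework-free sketch (plain Python, not the authors' implementation); the function name `sil_losses` and the per-transition loop are illustrative assumptions, and a real implementation would operate on batched tensors with automatic differentiation.

```python
def sil_losses(log_probs, returns, values):
    """Sketch of the Self-Imitation Learning objective.

    Only transitions whose observed return R exceeds the current value
    estimate V(s) contribute, via the clipped advantage (R - V)_+.
    """
    policy_loss = 0.0
    value_loss = 0.0
    for logp, ret, val in zip(log_probs, returns, values):
        adv = max(ret - val, 0.0)       # (R - V)_+: imitate only "good" past actions
        policy_loss += -logp * adv      # advantage-weighted negative log-likelihood
        value_loss += 0.5 * adv ** 2    # pull V(s) up toward R on those transitions
    n = len(returns)
    return policy_loss / n, value_loss / n

# Toy usage: the second transition has R <= V, so it is clipped out.
p_loss, v_loss = sil_losses(
    log_probs=[-1.0, -2.0], returns=[2.0, 0.0], values=[1.0, 1.0]
)
print(p_loss, v_loss)  # 0.5 0.25
```

Because the clipped advantage zeroes out transitions the agent already values correctly, the update concentrates learning signal on under-exploited good experiences, which is what the paper argues drives deeper exploration.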

Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee• 2018

Related benchmarks

Task                   | Dataset                                    | Result                              | Rank
Reinforcement Learning | Atari 2600 Montezuma's Revenge ALE (test)  | Score: 2.50e+3                      | 24
Reinforcement Learning | Atari 2600 Private Eye ALE (test)          | Score: 8.68e+3                      | 19
Reinforcement Learning | Atari 2600 Gravitar ALE (test)             | Score: 2.72e+3                      | 19
Reinforcement Learning | Atari 2600 Freeway ALE (test)              | Score: 34                           | 14
Reinforcement Learning | Atari 2600 Frostbite ALE (test)            | Avg Reward: 6.44e+3                 | 13
Reinforcement Learning | Atari 2600 Venture ALE (test)              | Score: 0.00e+0                      | 9
Reinforcement Learning | Atari 2600 Hero ALE (test)                 | --                                  | 4
Reinforcement Learning | Atari 49 games                             | Median Human-Normalized Score: 138.7 | 3

Other info

Code
