Unifying Count-Based Exploration and Intrinsic Motivation
About
We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge.
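To make the pseudo-count construction concrete: let rho(x) be the probability a density model assigns to an observation x before updating on it, and rho'(x) the "recoding probability" it assigns after a single update on x. Requiring rho(x) = N̂(x)/n̂ and rho'(x) = (N̂(x)+1)/(n̂+1) and solving for N̂(x) gives the paper's pseudo-count, N̂(x) = rho(x)(1 − rho'(x)) / (rho'(x) − rho(x)), which is then turned into an intrinsic reward proportional to (N̂(x) + 0.01)^(−1/2). The Python sketch below illustrates this derivation under stated assumptions: the density model itself is abstracted away (the paper derives its probabilities from a pixel-level model), and the bonus coefficient `beta` shown here is an illustrative value, not necessarily the paper's.

```python
import math

def pseudo_count(rho: float, rho_prime: float) -> float:
    """Pseudo-count N_hat(x) derived from a density model.

    rho:       probability the model assigns to x *before* updating on x
    rho_prime: "recoding probability" of x *after* one update on x

    Solving rho = N_hat / n_hat and rho_prime = (N_hat + 1) / (n_hat + 1)
    for N_hat yields the formula below.
    """
    gain = rho_prime - rho
    if gain <= 0.0:
        # The model must assign x higher probability after observing it
        # ("learning-positive"); otherwise the pseudo-count is undefined.
        return float("inf")
    return rho * (1.0 - rho_prime) / gain

def exploration_bonus(n_hat: float, beta: float = 0.05) -> float:
    """Intrinsic reward of the form beta * (N_hat + 0.01)^(-1/2).

    beta = 0.05 is an illustrative scale, not a prescribed constant.
    """
    return beta / math.sqrt(n_hat + 0.01)

# Example: the model assigned x probability 0.10 before the update and
# 0.11 after it, implying roughly 8.9 pseudo-observations of x.
n_hat = pseudo_count(0.10, 0.11)
print(f"pseudo-count = {n_hat:.2f}, bonus = {exploration_bonus(n_hat):.4f}")
```

Note that the pseudo-count shrinks the more confidently the model already predicts x: a large prediction gain (rho'(x) much larger than rho(x)) signals a novel observation and hence a small N̂(x) and a large intrinsic reward.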
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Atari 2600 MONTEZUMA'S REVENGE | Score | 3.71e+3 | 45 |
| Atari Game Playing | Pitfall! | Score | 0.00e+0 | 25 |
| Reinforcement Learning | Atari 2600 Montezuma's Revenge ALE (test) | Score | 273.7 | 24 |
| Reinforcement Learning | Atari 2600 Private Eye ALE (test) | Score | 99.32 | 19 |
| Reinforcement Learning | Atari 2600 Gravitar ALE (test) | Score | 239 | 19 |
| Reinforcement Learning | Atari 2600 Freeway ALE (test) | Score | 30.48 | 14 |
| Reinforcement Learning | Atari 2600 Frostbite ALE (test) | Avg Reward | 352 | 13 |
| Reinforcement Learning | Atari 2600 Arcade Learning Environment (evaluation) | Montezuma's Revenge Score | 399.5 | 11 |
| Reinforcement Learning | Atari 2600 GRAVITAR | GRAVITAR Score | 199.8 | 10 |
| Reinforcement Learning | Arcade Learning Environment Atari 2600 2013 (full set) | Asterix Score | 7.92e+3 | 9 |