Never Give Up: Learning Directed Exploration Strategies
About
We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbours over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings used in the nearest-neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators (UVFA) to simultaneously learn many directed exploration policies with the same neural network, each with a different trade-off between exploration and exploitation. Because the same network serves all degrees of exploration/exploitation, we demonstrate that transfer occurs from predominantly exploratory policies to effective exploitative ones. The proposed method can be incorporated into modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent on all hard-exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human-normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.
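To make the episodic bonus concrete, below is a minimal NumPy sketch of the kernel pseudo-count computation the paper describes: an inverse kernel over the distances to the k nearest neighbours in the current episode's memory, with cluster removal and similarity clipping. The function name and the specific hyperparameter values are illustrative, not the paper's exact ones.

```python
import numpy as np

def episodic_intrinsic_reward(embedding, memory, k=10, eps=1e-3, c=1e-3,
                              cluster_distance=8e-3, max_similarity=8.0):
    """Sketch of the episodic novelty bonus (names/values are illustrative).

    `embedding` is the controllable-state embedding f(x_t) produced by the
    self-supervised inverse-dynamics network; `memory` is a list of the
    embeddings stored so far in the current episode.
    """
    if len(memory) == 0:
        return 0.0
    mem = np.stack(memory)
    # Squared Euclidean distances to the k nearest neighbours in memory.
    d2 = np.sum((mem - embedding) ** 2, axis=1)
    d2 = np.sort(d2)[:k]
    # Normalise by the mean k-NN distance so the kernel is scale-invariant
    # (the paper uses a running mean across time).
    d2 = d2 / max(d2.mean(), 1e-8)
    # Cluster removal: ignore neighbours that are essentially duplicates.
    d2 = np.maximum(d2 - cluster_distance, 0.0)
    # Inverse kernel K(x, y) = eps / (d^2 + eps), summed as a pseudo-count.
    kernel = eps / (d2 + eps)
    similarity = np.sqrt(kernel.sum()) + c
    # Similarity clipping: no bonus for states that are far too familiar.
    if similarity > max_similarity:
        return 0.0
    return 1.0 / similarity
```

In the full agent this bonus is mixed with the extrinsic reward, roughly r_t = r_t^e + beta * r_t^i, where each UVFA policy is conditioned on a different beta to realise a different exploration/exploitation trade-off.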
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Reinforcement Learning | ALE Atari 57 games | HWRB | 8 | 16 |
| Reinforcement Learning | Atari 2600 Qbert | Score | 6.85e+5 | 15 |
| Reinforcement Learning | Atari Breakout | Mean Return | 864 | 11 |
| Reinforcement Learning | Atari Space Invaders | Mean Episode Return | 4.53e+4 | 11 |
| Reinforcement Learning | Atari 2600 (57 games full suite) | Games Above Human Count | 51 | 11 |
| Reinforcement Learning | Atari Beam Rider | Mean Return | 1.67e+5 | 7 |
| Reinforcement Learning | Atari Pong | Mean Episode Return | 19.6 | 7 |