Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models
About
Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzmann exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.
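The core idea of the proposed method can be sketched in a few lines: a forward model of the dynamics is trained alongside the agent, and states it predicts poorly earn a larger intrinsic bonus, which decays over time so the agent eventually follows extrinsic reward alone. The linear model and the specific scaling constants below are illustrative assumptions, not the paper's neural-network parameterization.

```python
import numpy as np

class ForwardModel:
    """Linear forward model next_state ~ W @ [state; action], fit by SGD.

    A stand-in for the learned dynamics model; the paper uses a neural
    network over encoded pixel states instead.
    """

    def __init__(self, state_dim, action_dim, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((state_dim, state_dim + action_dim))
        self.lr = lr

    def update(self, state, action, next_state):
        """One SGD step on squared prediction error; returns the error."""
        x = np.concatenate([state, action])
        err = next_state - self.W @ x
        self.W += self.lr * np.outer(err, x)
        return float(err @ err)

def exploration_bonus(pred_error, step, beta=0.05):
    """Intrinsic bonus: prediction error scaled by a decaying coefficient.

    Dividing by the timestep makes the bonus vanish asymptotically, so
    well-modeled (familiar) states stop attracting the agent.
    """
    return beta * pred_error / step
```

In use, the agent would be trained on the augmented reward `r + exploration_bonus(e, t)`, where `e` is the model's error on the observed transition; novel regions of the state space, where the model predicts badly, thereby attract additional visits.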
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reinforcement Learning | Atari 2600 Montezuma's Revenge | Score: 0.00e+0 | 45 |
| Reinforcement Learning | Atari 2600 Alien | Score: 3.07e+3 | 15 |
| Reinforcement Learning | Atari 2600 Alien (test) | Score: 1.44e+3 | 10 |
| Keep-Away (2v2) | Multi-agent particle environment (MPE) (test) | Mean Episode Extrinsic Reward: 11.88 | 7 |
| Predator-Prey (2v2) | Multi-agent particle environment (MPE) (test) | Mean Episode Extrinsic Reward: -6.5 | 7 |
| Physical Deception (2v1) | Multi-agent particle environment (MPE) (test) | Mean Extrinsic Reward: 68.8 | 7 |
| 3v1 with keeper (3v2) | Google Research Football (GRF) (test) | Mean Extrinsic Reward: 0.024 | 6 |
| Heterogeneous Navigation (4v0) | Multi-agent particle environment (MPE) (test) | Mean Episode Extrinsic Reward: 286.2 | 6 |
| Cooperative Navigation (3v0) | Multi-agent particle environment (MPE) (test) | Mean Episode Extrinsic Reward: 133.9 | 6 |