Human-level Atari 200x faster

About

Building general agents that perform well over a wide range of tasks has been an important goal in reinforcement learning since its inception. A large body of work has addressed this problem, with performance frequently measured by scores on the wide range of environments contained in the Atari 57 benchmark. Agent57 was the first agent to surpass the human benchmark on all 57 games, but this came at the cost of poor data-efficiency, requiring nearly 80 billion frames of experience. Taking Agent57 as a starting point, we employ a diverse set of strategies to achieve a 200-fold reduction in the experience needed to outperform the human baseline. We investigate a range of instabilities and bottlenecks we encountered while reducing the data regime, and propose effective solutions to build a more robust and efficient agent. We also demonstrate competitive performance with high-performing methods such as Muesli and MuZero. The four key components of our approach are (1) an approximate trust region method which enables stable bootstrapping from the online network, (2) a normalisation scheme for the loss and priorities which improves robustness when learning a set of value functions with a wide range of scales, (3) an improved architecture employing techniques from NFNets in order to leverage deeper networks without the need for normalisation layers, and (4) a policy distillation method which serves to smooth out the instantaneous greedy policy over time.
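The first component above, an approximate trust region for bootstrapping from the online network, can be sketched roughly as follows. The function name, the specific masking rule, and the scale estimate are illustrative assumptions rather than the paper's exact formulation; the idea shown is simply to suppress updates on transitions where the online value estimate has already drifted far from a stable reference (the target network), so that bootstrapping stays well-behaved.

```python
import numpy as np

def trust_region_td_loss(q_online, q_target, td_error, alpha=1.0):
    """Hypothetical sketch of an approximate trust region for TD learning.

    The squared TD loss is applied only where the online estimate is
    within alpha scale-units of the target network's estimate; updates
    outside that region are masked to zero. (Illustrative rule, not the
    paper's exact one.)
    """
    # Rough scale of the target values; epsilon guards against zero std.
    sigma = np.std(q_target) + 1e-8
    inside = np.abs(q_online - q_target) <= alpha * sigma
    # Standard 0.5 * delta^2 TD loss, masked outside the trust region.
    return np.mean(np.where(inside, 0.5 * td_error**2, 0.0))
```

In this sketch a transition whose online value has moved far from the target contributes no gradient, which is one simple way to stabilise bootstrapping when the online network is updated aggressively in a low-data regime.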

Steven Kapturowski, Víctor Campos, Ray Jiang, Nemanja Rakićević, Hado van Hasselt, Charles Blundell, Adrià Puigdomènech Badia • 2022

Related benchmarks

Task                    Dataset               Result                 Rank
Reinforcement Learning  Atari 57              Atlantis: 1.67e+6      21
Reinforcement Learning  Atari 2600, 57 games  --                     20
Atari Game Playing      Atari-57 (test)       Alien score: 4.81e+4   8
