Online and Offline Reinforcement Learning by Planning with a Learned Model
About
Learning efficiently from small amounts of data has long been the focus of model-based reinforcement learning, both for the online case when interacting with the environment and the offline case when learning from a fixed dataset. However, to date no single unified algorithm has demonstrated state-of-the-art results in both settings. In this work, we describe the Reanalyse algorithm, which uses model-based policy and value improvement operators to compute new, improved training targets on existing data points, allowing efficient learning for data budgets varying by several orders of magnitude. We further show that Reanalyse can also be used to learn entirely from demonstrations without any environment interactions, as in the case of offline reinforcement learning (offline RL). Combining Reanalyse with the MuZero algorithm, we introduce MuZero Unplugged, a single unified algorithm for any data budget, including offline RL. In contrast to previous work, our algorithm does not require any special adaptations for the off-policy or offline RL settings. MuZero Unplugged sets new state-of-the-art results on the RL Unplugged offline RL benchmark as well as on the online RL benchmark of Atari in the standard 200 million frame setting.
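The core of Reanalyse, as described above, is to revisit stored trajectories and recompute their policy and value targets with the current network, so that old data keeps benefiting from model improvements. Below is a minimal sketch of that loop, assuming a hypothetical `search` callable that runs MCTS with the current learned model; all names (`Step`, `SearchResult`, `reanalyse`) are illustrative, not the paper's actual code.

```python
# Sketch of the Reanalyse idea: recompute training targets on stored
# data using fresh search with the current learned model. Hypothetical
# names throughout; only the overall structure follows the paper.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SearchResult:
    visit_counts: List[int]   # root visit count per action
    root_value: float         # value estimate of the fresh search

@dataclass
class Step:
    observation: object
    reward: float
    policy_target: List[float] = field(default_factory=list)
    value_target: float = 0.0

def reanalyse(trajectory: List[Step],
              search: Callable[[object], SearchResult],
              n_steps: int = 5,
              discount: float = 0.997) -> None:
    """Refresh the training targets of a stored trajectory in place.

    `search` re-runs MCTS with the *current* model at a stored
    observation: its visit-count distribution becomes the new policy
    target, and its root value bootstraps an n-step value target.
    """
    results = [search(step.observation) for step in trajectory]
    for t, step in enumerate(trajectory):
        # Improved policy target: normalized visit counts of the new search.
        total = sum(results[t].visit_counts)
        step.policy_target = [v / total for v in results[t].visit_counts]

        # Improved value target: discounted n-step return, bootstrapped
        # with the fresh search value n steps ahead (clipped at episode end).
        k = min(t + n_steps, len(trajectory) - 1)
        returns = sum(trajectory[i].reward * discount ** (i - t)
                      for i in range(t, k))
        step.value_target = returns + results[k].root_value * discount ** (k - t)
```

In the online setting this refresh runs alongside environment interaction; in the fully offline setting it is the only source of new targets, which is what lets a single algorithm span both regimes.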
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Atari 2600, 57 games (test) | Median Human-Normalized Score | 1.01e+3 | 15 |
| Atari Game Playing | Atari 57 games, 200M environment frames | Median Human-Normalized Score | 1.05e+3 | 11 |
| Reinforcement Learning | Atari 57 (ALE), 200M frames, sticky actions | Median Human-Normalized Score | 1.05e+3 | 9 |
| Offline Reinforcement Learning | RL Unplugged Atari, 46 games | Median Human-Normalized Score | 265.3 | 8 |
| Offline Reinforcement Learning | RL Unplugged DM Control v1 | Cartpole Swingup Score | 343.3 | 6 |