Smaller World Models for Reinforcement Learning
About
Sample efficiency remains a fundamental issue in reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a learned model. We propose a new neural network architecture for world models based on a vector quantized variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which allows only 100K interactions with the real environment. We apply our method to 36 Atari environments and show that it reaches performance comparable to their SimPLe algorithm, while our model is significantly smaller.
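The core of the encoding step described above is the VQ-VAE's discrete bottleneck: each spatial position of the encoder's continuous output is snapped to its nearest codebook vector, and the resulting indices are what the convolutional LSTM predicts. A minimal numpy sketch of that nearest-codebook lookup (shapes and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def vq_encode(z, codebook):
    """Map a continuous latent grid to discrete codebook indices.

    z:        (H, W, D) continuous encoder output
    codebook: (K, D)    learned embedding vectors
    returns:  (H, W)    index of the nearest codebook vector per position
    """
    # Squared Euclidean distance from every latent to every code: (H, W, K)
    dists = ((z[..., None, :] - codebook) ** 2).sum(axis=-1)
    return dists.argmin(axis=-1)

# Toy example: K=8 codes of dimension D=4, a 3x3 latent grid.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))
z = rng.normal(size=(3, 3, 4))

indices = vq_encode(z, codebook)      # discrete tokens the LSTM would predict
quantized = codebook[indices]          # (3, 3, 4) quantized latents for decoding
```

The world model then only needs to predict a small grid of integer indices per step, rather than full-resolution frames, which is one reason the model can be kept small.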
Jan Robine, Tobias Uelwer, Stefan Harmeling • 2020
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reinforcement Learning | Atari 100K 36 games | Alien Score: 423.3 | 4 |
| Autonomous Driving | Highway-Env merge v0 (100 evaluation episodes) | Collision Rate: 0.29 | 3 |
| Autonomous Driving Planning | merge v0 (test) | Avg Episode Reward: 30.114 | 3 |
| Autonomous Driving | Highway-Env v0 (100 evaluation episodes) | Collision Rate: 100 | 3 |
| Autonomous Driving | Highway-Env roundabout v0 (100 evaluation episodes) | Collision Rate: 57 | 3 |
| Autonomous Driving Planning | highway v0 (test) | Avg Episode Reward: 3.121 | 3 |
| Autonomous Driving Planning | roundabout v0 (test) | Avg Episode Reward: 3.826 | 3 |