
Smaller World Models for Reinforcement Learning

About

Sample efficiency remains a fundamental issue in reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a model. We propose a new neural network architecture for world models based on a vector-quantized variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which allows only 100K interactions with the real environment. We apply our method to 36 Atari environments and show that we reach performance comparable to their SimPLe algorithm, while our model is significantly smaller.

Jan Robine, Tobias Uelwer, Stefan Harmeling • 2020
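
The architecture described in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the layer sizes, codebook size, hidden width, action count, and the class names (VQVAE, ConvLSTMCell, WorldModel) are all assumptions, and the sketch omits the VQ training losses (straight-through estimator, commitment loss), reward prediction, and the PPO agent trained on the imagined rollouts.

```python
# Hedged sketch of a VQ-VAE world model with a convolutional LSTM dynamics
# model. All hyperparameters and names below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VQVAE(nn.Module):
    """Encodes 84x84 frames to a 21x21 grid of discrete codebook indices."""

    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),      # 84 -> 42
            nn.Conv2d(32, code_dim, 4, stride=2, padding=1),          # 42 -> 21
        )
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),        # 21 -> 84
        )

    def encode(self, x):
        z = self.encoder(x).permute(0, 2, 3, 1)                       # (B, H, W, D)
        dist = torch.cdist(z.reshape(-1, z.shape[-1]), self.codebook.weight)
        return dist.argmin(dim=1).view(z.shape[:3])                   # (B, H, W) indices

    def decode(self, idx):
        z_q = self.codebook(idx).permute(0, 3, 1, 2)                  # (B, D, H, W)
        return self.decoder(z_q)


class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed with convolutions over the grid."""

    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class WorldModel(nn.Module):
    """Predicts the next frame's codebook indices from the current ones."""

    def __init__(self, num_codes=512, code_dim=64, hid_ch=128, num_actions=18):
        super().__init__()
        self.num_actions = num_actions
        self.vqvae = VQVAE(num_codes, code_dim)
        self.cell = ConvLSTMCell(code_dim + num_actions, hid_ch)
        self.head = nn.Conv2d(hid_ch, num_codes, 1)                   # per-cell logits

    def init_state(self, batch, h, w, device):
        zeros = torch.zeros(batch, self.cell.hid_ch, h, w, device=device)
        return zeros, zeros.clone()

    def forward(self, idx, action, state):
        z_q = self.vqvae.codebook(idx).permute(0, 3, 1, 2)            # (B, D, H, W)
        B, _, H, W = z_q.shape
        a = F.one_hot(action, self.num_actions).float()               # (B, A)
        a = a.view(B, -1, 1, 1).expand(-1, -1, H, W)                  # tile over grid
        h, c = self.cell(torch.cat([z_q, a], dim=1), state)
        return self.head(h), (h, c)                                   # logits, new state


# One imagined step: encode a frame, predict the next indices, decode them.
model = WorldModel()
frame = torch.rand(1, 1, 84, 84)
idx = model.vqvae.encode(frame)                                       # (1, 21, 21)
state = model.init_state(1, 21, 21, frame.device)
logits, state = model(idx, torch.tensor([0]), state)
next_idx = logits.argmax(dim=1)                                       # (1, 21, 21)
next_frame = model.vqvae.decode(next_idx)                             # (1, 1, 84, 84)
```

Predicting discrete embedding indices turns next-frame prediction into a per-cell classification problem trainable with cross-entropy, and the agent can be trained on rollouts generated entirely in this latent space.
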

Related benchmarks

Task                        | Dataset                                             | Metric              | Result | Rank
Reinforcement Learning      | Atari 100K (36 games)                               | Alien Score         | 423.3  | 4
Autonomous Driving          | Highway-Env merge v0 (100 evaluation episodes)      | Collision Rate      | 0.29   | 3
Autonomous Driving Planning | merge v0 (test)                                     | Avg. Episode Reward | 30.114 | 3
Autonomous Driving          | Highway-Env v0 (100 evaluation episodes)            | Collision Rate      | 100    | 3
Autonomous Driving          | Highway-Env roundabout v0 (100 evaluation episodes) | Collision Rate      | 57     | 3
Autonomous Driving Planning | highway v0 (test)                                   | Avg. Episode Reward | 3.121  | 3
Autonomous Driving Planning | roundabout v0 (test)                                | Avg. Episode Reward | 3.826  | 3
