
Transformer-based World Models Are Happy With 100k Interactions

About

Deep neural networks have been successful in many reinforcement learning settings. However, compared to human learners they are overly data-hungry. To build a sample-efficient world model, we apply a transformer to real-world episodes in an autoregressive manner: not only the compact latent states and the actions taken but also the experienced or predicted rewards are fed into the transformer, so that it can attend flexibly to all three modalities at different time steps. The transformer allows our world model to access previous states directly, instead of viewing them through a compressed recurrent state. By utilizing the Transformer-XL architecture, it is able to learn long-term dependencies while staying computationally efficient. Our transformer-based world model (TWM) generates meaningful new experience, which is used to train a policy that outperforms previous model-free and model-based reinforcement learning algorithms on the Atari 100k benchmark.
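The core idea of feeding all three modalities to the transformer can be pictured as interleaving per-timestep latent, action, and reward tokens into one sequence. The sketch below illustrates this with numpy; the function name and the assumption that all three modalities are already embedded into a shared token dimension are ours, not details from the paper.

```python
import numpy as np

def interleave_modalities(latents, actions, rewards):
    """Interleave latent, action, and reward embeddings into one token
    sequence (z_0, a_0, r_0, z_1, a_1, r_1, ...), so a causal transformer
    can attend to all three modalities at every time step.

    All inputs are (T, d) arrays, assumed already embedded into a shared
    d-dimensional token space (a simplification of this sketch).
    """
    T, d = latents.shape
    tokens = np.empty((3 * T, d), dtype=latents.dtype)
    tokens[0::3] = latents   # positions 0, 3, 6, ... hold latent states
    tokens[1::3] = actions   # positions 1, 4, 7, ... hold actions
    tokens[2::3] = rewards   # positions 2, 5, 8, ... hold rewards
    return tokens

# Toy usage: 4 time steps, 8-dimensional embeddings.
z = np.ones((4, 8))
a = 2 * np.ones((4, 8))
r = 3 * np.ones((4, 8))
seq = interleave_modalities(z, a, r)
print(seq.shape)  # (12, 8)
```

A transformer applied autoregressively over such a sequence can condition each prediction on every past state, action, and reward directly, rather than through a compressed recurrent summary.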

Jan Robine, Marc Höftmann, Tobias Uelwer, Stefan Harmeling • 2023

Related benchmarks

Task                    Dataset                      Metric              Result   Rank
Reinforcement Learning  Atari 100K (test)            Mean Score          1.746    21
Reinforcement Learning  Atari 100k                   Alien Score         674.6    18
Reinforcement Learning  Atari 100k steps (overall)   Game Score: Boxing  77.5     9
Reinforcement Learning  Atari Breakout 100k (test)   HNS                 63.5     6
Reinforcement Learning  Atari Assault 100k (test)    HNS                 0.886    6
