
Uncovering Untapped Potential in Sample-Efficient World Model Agents

About

World model (WM) agents enable sample-efficient reinforcement learning by learning policies entirely from simulated experience. However, existing token-based world models (TBWMs) are limited to visual inputs and discrete actions, restricting their adoption and applicability. Moreover, although both intrinsic motivation and prioritized WM replay have shown promise in improving WM performance and generalization, they remain underexplored in this setting, particularly in combination. We introduce Simulus, a highly modular TBWM agent that integrates (1) a modular multi-modality tokenization framework, (2) intrinsic motivation, (3) prioritized WM replay, and (4) regression-as-classification for reward and return prediction. Simulus achieves state-of-the-art sample efficiency for planning-free WMs across three diverse benchmarks. Ablation studies reveal the individual contribution of each component while highlighting their synergy. Our code and model weights are publicly available at https://github.com/leor-c/Simulus.
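The abstract mentions regression-as-classification for reward and return prediction but does not detail it on this page. A common instance of the idea, two-hot encoding over fixed bin centers followed by a cross-entropy loss, can be sketched as below. This is an illustrative sketch under that assumption, not Simulus's actual implementation or API; all names here are hypothetical.

```python
import numpy as np

def two_hot(value, bins):
    """Encode a scalar as a two-hot distribution over fixed bin centers.

    The probability mass is split between the two nearest bin centers,
    proportionally to proximity, so the expectation over bins recovers
    the scalar exactly. A classifier trained with cross-entropy against
    this target thereby performs regression.
    """
    value = np.clip(value, bins[0], bins[-1])
    idx = np.searchsorted(bins, value)  # first bin center >= value
    probs = np.zeros(len(bins))
    if idx == 0 or bins[idx] == value:
        probs[idx] = 1.0
    else:
        lo, hi = bins[idx - 1], bins[idx]
        w = (value - lo) / (hi - lo)    # weight toward the upper bin
        probs[idx - 1] = 1.0 - w
        probs[idx] = w
    return probs

def decode(probs, bins):
    """Recover the scalar as the expectation of the bin centers."""
    return float(np.dot(probs, bins))

# Example: 9 bin centers spanning [-1, 1] with spacing 0.25.
bins = np.linspace(-1.0, 1.0, 9)
p = two_hot(0.3, bins)       # mass split between centers 0.25 and 0.5
print(decode(p, bins))       # recovers 0.3 (up to float error)
```

Casting scalar targets as categorical distributions is popular in recent world-model agents because cross-entropy handles multi-modal and heavy-tailed reward distributions more gracefully than a squared-error loss.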

Lior Cohen, Kaixin Wang, Bingyi Kang, Uri Gadot, Shie Mannor • 2025

Related benchmarks

Task: Reinforcement Learning
Dataset: Atari 100K (test)
Result: Mean Score 1.609
Rank: 21
