Enhancing Policy Learning with World-Action Model
About
This paper presents the World-Action Model (WAM), an action-regularized world model that jointly reasons over future visual observations and the actions that drive state transitions. Unlike conventional world models trained solely via image prediction, WAM adds an inverse dynamics objective to DreamerV2 that predicts actions from latent state transitions, encouraging the learned representations to capture the action-relevant structure critical for downstream control. We evaluate WAM's ability to enhance policy learning on eight manipulation tasks from the CALVIN benchmark. We first pretrain a diffusion policy via behavioral cloning on world model latents, then refine it with model-based PPO inside the frozen world model. Without modifying the policy architecture or training procedure, WAM improves average behavioral cloning success from 59.4% to 71.2% over DreamerV2 and DiWA baselines. After PPO fine-tuning, WAM achieves 92.8% average success versus 79.8% for the baseline, with two tasks reaching 100%, while using 8.7x fewer training steps.
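The core idea, an inverse dynamics head that predicts the action from a pair of consecutive latent states and whose loss is added to the world model's reconstruction objectives, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network sizes, the use of an MSE loss, and the simple concatenation of consecutive latents are all assumptions.

```python
import torch
import torch.nn as nn


class InverseDynamicsHead(nn.Module):
    """Predicts the action a_t that drove the latent transition z_t -> z_{t+1}.

    Hypothetical sketch of an action-regularization head; the real WAM head
    sits on top of DreamerV2's recurrent latent state.
    """

    def __init__(self, latent_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden),  # consume (z_t, z_{t+1}) pair
            nn.ELU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, z_t: torch.Tensor, z_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_t, z_next], dim=-1))


def inverse_dynamics_loss(head, z_t, z_next, actions):
    # MSE between predicted and ground-truth actions; in training this term
    # would be summed with the world model's image/reward prediction losses.
    pred = head(z_t, z_next)
    return ((pred - actions) ** 2).mean()


# Toy usage with random latents and 7-DoF actions (CALVIN uses a 7-DoF arm).
latent_dim, action_dim, batch = 32, 7, 16
head = InverseDynamicsHead(latent_dim, action_dim)
z_t = torch.randn(batch, latent_dim)
z_next = torch.randn(batch, latent_dim)
actions = torch.randn(batch, action_dim)
loss = inverse_dynamics_loss(head, z_t, z_next, actions)
```

Because the gradient of this loss flows back into the latent encoder, the representation is pushed to retain whatever information distinguishes one action's effect from another's, which is exactly the structure a downstream policy needs.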
Related benchmarks
| Task | Dataset | Success Rate (%) | Rank |
|---|---|---|---|
| turn off lightbulb | CALVIN | 100 | 6 |
| close drawer | CALVIN | 96.6 | 3 |
| move slider left | CALVIN | 87.5 | 3 |
| move slider right | CALVIN | 89.7 | 3 |
| open drawer | CALVIN | 96.7 | 3 |
| turn on LED | CALVIN | 96.6 | 3 |
| turn on lightbulb | CALVIN | 100 | 3 |