
Enhancing Policy Learning with World-Action Model

About

This paper presents the World-Action Model (WAM), an action-regularized world model that jointly reasons over future visual observations and the actions that drive state transitions. Unlike conventional world models trained solely via image prediction, WAM incorporates an inverse dynamics objective into DreamerV2 that predicts actions from latent state transitions, encouraging the learned representations to capture action-relevant structure critical for downstream control. We evaluate WAM on enhancing policy learning across eight manipulation tasks from the CALVIN benchmark. We first pretrain a diffusion policy via behavioral cloning on world model latents, then refine it with model-based PPO inside the frozen world model. Without modifying the policy architecture or training procedure, WAM improves average behavioral cloning success from 59.4% to 71.2% over DreamerV2 and DiWA baselines. After PPO fine-tuning, WAM achieves 92.8% average success versus 79.8% for the baseline, with two tasks reaching 100%, using 8.7x fewer training steps.
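The key modification described above is adding an inverse-dynamics term to the world model's training objective, so that latent state transitions must also explain the actions that caused them. The sketch below illustrates that idea in minimal NumPy; the MLP head, layer sizes, loss weighting, and all names are illustrative assumptions, not the authors' DreamerV2 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, ACTION_DIM, HIDDEN = 32, 7, 64  # assumed sizes for illustration

# Hypothetical inverse-dynamics head: a one-hidden-layer MLP mapping a pair of
# consecutive latent states (z_t, z_{t+1}) to the action a_t between them.
W1 = rng.normal(0.0, 0.1, (2 * LATENT_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, ACTION_DIM))

def predict_action(z_t, z_next):
    """Predict the action that drove the transition z_t -> z_{t+1}."""
    h = np.maximum(0.0, np.concatenate([z_t, z_next], axis=-1) @ W1)  # ReLU
    return h @ W2

def wam_loss(recon_loss, kl_loss, z_t, z_next, action, beta=1.0):
    """DreamerV2-style objective plus a weighted inverse-dynamics term.

    recon_loss / kl_loss stand in for the usual image-reconstruction and
    KL terms; the added inv_loss regularizes latents toward
    action-relevant structure.
    """
    inv_loss = np.mean((predict_action(z_t, z_next) - action) ** 2)
    return recon_loss + kl_loss + beta * inv_loss

# Toy batch of 8 transitions.
z_t = rng.normal(size=(8, LATENT_DIM))
z_next = rng.normal(size=(8, LATENT_DIM))
action = rng.normal(size=(8, ACTION_DIM))
total = wam_loss(recon_loss=0.5, kl_loss=0.1, z_t=z_t, z_next=z_next, action=action)
```

During training, gradients from `inv_loss` would flow back into the encoder producing `z_t` and `z_{t+1}`, which is what encourages the latents to carry action-relevant information.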

Yuci Han, Alper Yilmaz · 2026

Related benchmarks

| Task | Dataset | Success Rate (%) | Rank |
| --- | --- | --- | --- |
| turn off lightbulb | CALVIN | 100 | 6 |
| close drawer | CALVIN | 96.6 | 3 |
| move slider left | CALVIN | 87.5 | 3 |
| move slider right | CALVIN | 89.7 | 3 |
| open drawer | CALVIN | 96.7 | 3 |
| turn on LED | CALVIN | 96.6 | 3 |
| turn on lightbulb | CALVIN | 100 | 3 |
