
Closing the Train-Test Gap in World Models for Gradient-Based Planning

About

World models paired with model predictive control (MPC) can be trained offline on large-scale datasets of expert trajectories and enable generalization to a wide range of planning tasks at inference time. Compared to traditional MPC procedures, which rely on slow search algorithms or on iteratively solving optimization problems exactly, gradient-based planning offers a computationally efficient alternative. However, the performance of gradient-based planning has thus far lagged behind that of other approaches. In this paper, we propose improved methods for training world models that enable efficient gradient-based planning. We begin with the observation that although a world model is trained on a next-state prediction objective, it is used at test time to instead estimate a sequence of actions. The goal of our work is to close this train-test gap. To that end, we propose train-time data synthesis techniques that enable significantly improved gradient-based planning with existing world models. At test time, our approach outperforms or matches the classical gradient-free cross-entropy method (CEM) across a variety of object manipulation and navigation tasks in 10% of the time budget.
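To make the train-test gap concrete: the sketch below illustrates gradient-based planning in its simplest form. This is a toy example, not the paper's method: the "world model" is a known linear dynamics function s_{t+1} = A s_t + B a_t (a real world model would be a learned network), and gradients of the final-state cost with respect to the action sequence are computed analytically rather than by automatic differentiation. All names and parameter values here are illustrative.

```python
import numpy as np

# Toy linear "world model": s_{t+1} = A @ s_t + B @ a_t.
# In the paper's setting this would be a learned next-state predictor.
A = np.eye(2)
B = np.eye(2)

def rollout(s0, actions):
    """Roll the model forward through a sequence of actions."""
    s = s0
    for a in actions:
        s = A @ s + B @ a
    return s

def plan(s0, goal, horizon=5, steps=50, lr=0.1):
    """Gradient-based planning: optimize the action sequence directly
    so the predicted final state reaches the goal."""
    actions = np.zeros((horizon, 2))
    for _ in range(steps):
        s_T = rollout(s0, actions)
        err = s_T - goal  # gradient of ||s_T - goal||^2 w.r.t. s_T is 2*err
        for t in range(horizon):
            # Sensitivity of s_T to action a_t in a linear model: A^{T-1-t} B.
            M = np.linalg.matrix_power(A, horizon - 1 - t) @ B
            actions[t] -= lr * 2.0 * M.T @ err
    return actions

s0 = np.zeros(2)
goal = np.array([1.0, -2.0])
actions = plan(s0, goal)
final_state = rollout(s0, actions)  # converges to the goal
```

The train-test mismatch the paper targets is visible here: the model is fit to predict s_{t+1} from (s_t, a_t), but at plan time it is differentiated through multi-step rollouts with respect to the actions, a use case the training objective never sees.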

Arjun Parthasarathy, Nimit Kalra, Rohun Agrawal, Yann LeCun, Oumayma Bounou, Pavel Izmailov, Micah Goldblum • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Planning | PushT | Success Rate | 94 | 24 |
| Planning | PointMaze | Success Rate | 98 | 18 |
| Planning | Wall | Success Rate | 0.94 | 18 |
| Robotic Manipulation Planning | Rope (val) | Chamfer Distance | 0.82 | 4 |
| Robotic Manipulation Planning | Granular (val) | Chamfer Distance | 0.24 | 4 |
