
VDAWorld: World Modelling via VLM-Directed Abstraction and Simulation

About

Generative video models, a leading approach to world modeling, face fundamental limitations. They often violate physical and logical rules, lack interactivity, and operate as opaque black boxes ill-suited for building structured, queryable worlds. To overcome these challenges, we propose a new paradigm focused on distilling an image-caption pair into a tractable, abstract representation optimized for simulation. We introduce VDAWorld, a framework where a Vision-Language Model (VLM) acts as an intelligent agent to orchestrate this process. The VLM autonomously constructs a grounded (2D or 3D) scene representation by selecting from a suite of vision tools, and accordingly chooses a compatible physics simulator (e.g., rigid body, fluid) to act upon it. VDAWorld can then infer latent dynamics from the static scene to predict plausible future states. Our experiments show that this combination of intelligent abstraction and adaptive simulation results in a versatile world model capable of producing high-quality simulations across a wide range of dynamic scenarios.
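The pipeline the abstract describes (VLM-directed tool selection, scene abstraction, simulator choice, rollout) can be sketched as the following minimal Python control loop. All names here (vlm_select_tools, choose_simulator, etc.) are illustrative stand-ins, not the authors' actual API, and the tool/simulator decisions are stubbed with keyword heuristics in place of real VLM queries.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    # Grounded abstract representation; dims is 2 or 3 per the paper's 2D/3D scenes.
    dims: int
    objects: list = field(default_factory=list)

def vlm_select_tools(caption: str) -> list[str]:
    # A real system would query a VLM agent; here we key off the caption text.
    tools = ["segmentation"]
    if "depth" in caption.lower() or "3d" in caption.lower():
        tools.append("depth_estimation")
    return tools

def build_scene(image, tools: list[str]) -> Scene:
    # Vision tools distill the image into a tractable scene abstraction.
    dims = 3 if "depth_estimation" in tools else 2
    return Scene(dims=dims, objects=["placeholder_object"])

def choose_simulator(scene: Scene, caption: str) -> str:
    # The VLM picks a physics backend compatible with the scene (e.g., rigid body vs. fluid).
    return "fluid" if "water" in caption.lower() else "rigid_body"

def simulate(scene: Scene, simulator: str, steps: int = 3) -> list[Scene]:
    # Roll the abstract scene forward to predict plausible future states (stubbed).
    return [Scene(dims=scene.dims, objects=list(scene.objects)) for _ in range(steps)]

def vdaworld(image, caption: str):
    tools = vlm_select_tools(caption)
    scene = build_scene(image, tools)
    simulator = choose_simulator(scene, caption)
    return simulator, simulate(scene, simulator)
```

The key design point mirrored here is that the simulator is chosen after the abstraction is built, so the representation and the dynamics model are always compatible.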

Felix O'Mahony, Roberto Cipolla, Ayush Tewari • 2025

Related benchmarks

Task: Physical Plausibility Evaluation
Dataset: Physics-IQ (modified)
Result: Solid Mechanics Score 51.1
Rank: 6
