
DriveVLA-W0: World Models Amplify Data Scaling Law in Autonomous Driving

About

Scaling Vision-Language-Action (VLA) models on large-scale data offers a promising path to achieving a more generalized driving intelligence. However, VLA models are limited by a "supervision deficit": the vast model capacity is supervised by sparse, low-dimensional actions, leaving much of their representational power underutilized. To remedy this, we propose DriveVLA-W0, a training paradigm that employs world modeling to predict future images. This task generates a dense, self-supervised signal that compels the model to learn the underlying dynamics of the driving environment. We showcase the paradigm's versatility by instantiating it for two dominant VLA archetypes: an autoregressive world model for VLAs that use discrete visual tokens, and a diffusion world model for those operating on continuous visual features. Building on the rich representations learned from world modeling, we introduce a lightweight action expert to address the inference latency for real-time deployment. Extensive experiments on the NAVSIM v1/v2 benchmarks and a 680x larger in-house dataset demonstrate that DriveVLA-W0 significantly outperforms BEV and VLA baselines. Crucially, it amplifies the data scaling law, showing that performance gains accelerate as the training dataset size increases.
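The "supervision deficit" argument can be made concrete with a small sketch: a driving action target is only a handful of numbers per sample, while a future-image prediction target supervises thousands of values. Below is a minimal, hypothetical NumPy illustration of a joint objective (action loss plus a weighted world-model loss); the shapes, the MSE losses, and the weight `lam` are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse action supervision: e.g. a short trajectory of 8 waypoints in 2D.
pred_actions = rng.normal(size=(8, 2))
gt_actions = rng.normal(size=(8, 2))

# Dense world-model supervision: e.g. a 16x16 grid of 256-dim
# future-frame visual features (shapes are illustrative only).
pred_future = rng.normal(size=(16, 16, 256))
gt_future = rng.normal(size=(16, 16, 256))

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

action_loss = mse(pred_actions, gt_actions)  # supervises 16 values
world_loss = mse(pred_future, gt_future)     # supervises 65,536 values

lam = 1.0  # assumed loss weight; not specified in the abstract
total_loss = action_loss + lam * world_loss

print(pred_actions.size, pred_future.size)  # 16 vs 65536 supervised targets
```

The point of the sketch is the ratio of supervised targets: the dense world-modeling term gives the shared backbone orders of magnitude more signal per sample than the action term alone.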

Yingyan Li, Shuyao Shang, Weisong Liu, Bing Zhan, Haochen Wang, Yuqi Wang, Yuntao Chen, Xiaoman Wang, Yasong An, Chufeng Tang, Lu Hou, Lue Fan, Zhaoxiang Zhang • 2025

Related benchmarks

Task                          Dataset                  Metric   Result   Rank
Autonomous Driving            NAVSIM v1 (test)         NC       99.3     99
Autonomous Driving Planning   NAVSIM v1                NC       99.3     17
Closed-loop Planning          NAVSIM Navtest (test)    PDMS     87.2     16
Motion Planning               NAVSIM v2 (test)         NC       98.5     15
Closed-loop Planning          NAVSIM v1                NC       98.7     13
Motion Planning               NAVSIM v1.1 (test)       NC       98.7     10
Closed-loop Planning          NAVSIM v2                NC       98.5     7
