DriveVLA-W0: World Models Amplify Data Scaling Law in Autonomous Driving
About
Scaling Vision-Language-Action (VLA) models on large-scale data offers a promising path to achieving a more generalized driving intelligence. However, VLA models are limited by a "supervision deficit": the vast model capacity is supervised by sparse, low-dimensional actions, leaving much of their representational power underutilized. To remedy this, we propose **DriveVLA-W0**, a training paradigm that employs world modeling to predict future images. This task generates a dense, self-supervised signal that compels the model to learn the underlying dynamics of the driving environment. We showcase the paradigm's versatility by instantiating it for two dominant VLA archetypes: an autoregressive world model for VLAs that use discrete visual tokens, and a diffusion world model for those operating on continuous visual features. Building on the rich representations learned from world modeling, we introduce a lightweight action expert to address the inference latency for real-time deployment. Extensive experiments on the NAVSIM v1/v2 benchmark and a 680x larger in-house dataset demonstrate that DriveVLA-W0 significantly outperforms BEV and VLA baselines. Crucially, it amplifies the data scaling law, showing that performance gains accelerate as the training dataset size increases.
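The objective described above can be sketched in a few lines. This is a minimal, illustrative sketch only (all function names and numbers are assumptions, not the released code): the sparse action loss is augmented with a dense world-model term, which takes a different form for each of the two archetypes from the abstract.

```python
import math

# Hedged sketch of the DriveVLA-W0 joint objective (names are hypothetical):
# the "supervision deficit" is addressed by adding a dense world-model loss
# on predicted future images to the sparse action loss.

def ar_world_loss(token_logprobs):
    """Autoregressive archetype: mean negative log-likelihood of the
    ground-truth discrete future-image tokens (one log-prob per token)."""
    return -sum(token_logprobs) / len(token_logprobs)

def diffusion_world_loss(pred_noise, true_noise):
    """Diffusion archetype: mean squared error of the denoising target
    on continuous visual features."""
    return sum((p - t) ** 2 for p, t in zip(pred_noise, true_noise)) / len(pred_noise)

def joint_loss(action_loss, world_loss, lam=1.0):
    """Sparse action supervision plus a lam-weighted dense world-model term
    (lam is a hypothetical weighting hyperparameter)."""
    return action_loss + lam * world_loss

# Toy numbers, only to show the shape of the objective:
wm = ar_world_loss([math.log(0.5), math.log(0.25)])  # ≈ 1.0397
print(round(joint_loss(0.2, wm, lam=0.5), 3))
```

The point of the paradigm is that the world-model term supervises far more of the model's capacity (dense image prediction) than the low-dimensional action term alone.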
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Autonomous Driving | NAVSIM v1 (test) | NC | 99.3 | 99 |
| Autonomous Driving Planning | NAVSIM v1 | NC | 99.3 | 17 |
| Closed-loop Planning | NAVSIM Navtest (test) | PDMS | 87.2 | 16 |
| Motion Planning | NAVSIM v2 (test) | NC | 98.5 | 15 |
| Closed-loop Planning | NAVSIM v1 | NC | 98.7 | 13 |
| Motion Planning | NAVSIM v1.1 (test) | NC | 98.7 | 10 |
| Closed-loop Planning | NAVSIM v2 | NC | 98.5 | 7 |