
DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers

About

World model-based searching and planning are widely recognized as a promising path toward human-level physical intelligence. However, current driving world models primarily rely on video diffusion models, which specialize in visual generation but lack the flexibility to incorporate other modalities such as action. In contrast, autoregressive transformers have demonstrated exceptional capability in modeling multimodal data. Our work aims to unify driving world simulation and trajectory planning into a single sequence modeling problem. We introduce a multimodal driving language based on interleaved image and action tokens, and develop DrivingGPT to learn joint world modeling and planning through standard next-token prediction. DrivingGPT demonstrates strong performance in both action-conditioned video generation and end-to-end planning, outperforming strong baselines on the large-scale nuPlan and NAVSIM benchmarks.
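The core idea of the "multimodal driving language" can be sketched in a few lines: each frame contributes a block of image tokens followed by a block of action tokens, and the combined sequence is trained with ordinary next-token prediction. The sketch below is a minimal illustration under assumed token counts and vocabularies, not the paper's actual tokenizer or model.

```python
# Minimal sketch of an interleaved image/action token sequence trained with
# next-token prediction, as described in the abstract. Token counts, token
# ids, and helper names are illustrative assumptions.

IMG_TOKENS_PER_FRAME = 4  # assumed VQ tokens per frame (real models use far more)
ACT_TOKENS_PER_STEP = 2   # assumed discretized action tokens per timestep

def interleave(image_tokens, action_tokens):
    """Build one multimodal sequence: [img_0..., act_0..., img_1..., act_1..., ...]."""
    assert len(image_tokens) == len(action_tokens)
    seq = []
    for img, act in zip(image_tokens, action_tokens):
        seq.extend(img)
        seq.extend(act)
    return seq

def next_token_pairs(seq):
    """Standard next-token targets: predict seq[t + 1] from the prefix seq[:t + 1]."""
    return [(seq[:t + 1], seq[t + 1]) for t in range(len(seq) - 1)]

# Two frames' worth of hypothetical tokens.
frames = [[11, 12, 13, 14], [21, 22, 23, 24]]
actions = [[101, 102], [103, 104]]

sequence = interleave(frames, actions)
pairs = next_token_pairs(sequence)
```

Because images and actions share one flat sequence, the same transformer serves as a world model (predicting the next frame's image tokens given past frames and actions) and as a planner (predicting the next action tokens given past frames).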

Yuntao Chen, Yuqi Wang, Zhaoxiang Zhang • 2024

Related benchmarks

Task                  Dataset                Metric  Result  Rank
Autonomous Driving    NAVSIM v1 (test)       NC      98.9    99
Planning              NAVSIM (navtest)       NC      98.9    53
Video Generation      nuScenes (val)         FVD     142.6   37
Planning              NAVSIM (test)          PDMS    82.4    22
Closed-loop Planning  NAVSIM Navtest (test)  PDMS    82.4    16
Closed-loop Planning  NAVSIM v1              NC      98.9    13
Video Generation      NAVSIM (navtest)       FID     15.04   6
Video Generation      Nav. (test)            FVD     142.6   4
