
UniDriveVLA: Unifying Understanding, Perception, and Action Planning for Autonomous Driving

About

Vision-Language-Action (VLA) models have recently emerged in autonomous driving, with the promise of leveraging rich world knowledge to improve the cognitive capabilities of driving systems. However, adapting such models for driving tasks currently faces a critical dilemma between spatial perception and semantic reasoning. Consequently, existing VLA systems are forced into suboptimal compromises: directly adopting 2D Vision-Language Models yields limited spatial perception, whereas enhancing them with 3D spatial representations often impairs the native reasoning capacity of VLMs. We argue that this dilemma largely stems from the coupled optimization of spatial perception and semantic reasoning within shared model parameters. To overcome this, we propose UniDriveVLA, a Unified Driving Vision-Language-Action model based on Mixture-of-Transformers that addresses the perception-reasoning conflict via expert decoupling. Specifically, it comprises three experts for driving understanding, scene perception, and action planning, which are coordinated through masked joint attention. In addition, we combine a sparse perception paradigm with a three-stage progressive training strategy to improve spatial perception while maintaining semantic reasoning capability. Extensive experiments show that UniDriveVLA achieves state-of-the-art performance in open-loop evaluation on nuScenes and closed-loop evaluation on Bench2Drive. Moreover, it demonstrates strong performance across a broad range of perception, prediction, and understanding tasks, including 3D detection, online mapping, motion forecasting, and driving-oriented VQA, highlighting its broad applicability as a unified model for autonomous driving. The code and model have been released at https://github.com/xiaomi-research/unidrivevla.
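The abstract describes the architecture only at a high level; for intuition, here is a minimal PyTorch sketch of what expert decoupling with masked joint attention can look like. Each stream keeps its own attention projections and FFN (decoupled parameters), while one shared attention pass with a mask controls cross-expert interaction. All names, shapes, and the mask layout here are hypothetical illustrations, not the released UniDriveVLA implementation (norms, dropout, and positional encodings are omitted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EXPERTS = ("understanding", "perception", "planning")  # hypothetical names

class ExpertDecoupledLayer(nn.Module):
    """One Mixture-of-Transformers layer: each stream has its own attention
    projections and FFN, while masked joint attention lets tokens from
    different experts interact in a controlled way."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        # Separate parameters per expert: perception updates do not have to
        # overwrite the weights that carry semantic reasoning, and vice versa.
        self.qkv = nn.ModuleDict({e: nn.Linear(dim, 3 * dim) for e in EXPERTS})
        self.ffn = nn.ModuleDict({
            e: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                             nn.Linear(4 * dim, dim))
            for e in EXPERTS})

    def forward(self, tokens: dict, attn_mask: torch.Tensor) -> dict:
        # Project each stream with its own expert weights, then concatenate
        # along the sequence axis for a single joint attention pass.
        qs, ks, vs, lengths = [], [], [], []
        for name in EXPERTS:
            q, k, v = self.qkv[name](tokens[name]).chunk(3, dim=-1)
            qs.append(q); ks.append(k); vs.append(v)
            lengths.append(tokens[name].shape[1])
        q = torch.cat(qs, dim=1); k = torch.cat(ks, dim=1); v = torch.cat(vs, dim=1)

        B, T, D = q.shape
        H = self.num_heads
        split_heads = lambda t: t.view(B, T, H, D // H).transpose(1, 2)

        # Masked joint attention: a (T, T) boolean mask decides which expert's
        # tokens may attend to which (e.g. planning attends to everything,
        # perception only to itself and the visual tokens).
        out = F.scaled_dot_product_attention(
            split_heads(q), split_heads(k), split_heads(v), attn_mask=attn_mask)
        out = out.transpose(1, 2).reshape(B, T, D)

        # Route each stream back through its own FFN expert (with residuals).
        result = {}
        for name, o in zip(EXPERTS, out.split(lengths, dim=1)):
            h = tokens[name] + o
            result[name] = h + self.ffn[name](h)
        return result
```

The design choice this sketch is meant to surface: cross-expert information flow happens only in the attention map (which the mask can gate), while all learned transformations stay expert-local, which is one way to realize the paper's claim of decoupling spatial perception from semantic reasoning in parameter space.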

Yongkang Li, Lijun Zhou, Sixu Yan, Bencheng Liao, Tianyi Yan, Kaixin Xiong, Long Chen, Hongwei Xie, Bing Wang, Guang Chen, Hangjun Ye, Wenyu Liu, Haiyang Sun, Xinggang Wang • 2026

Related benchmarks

| Task                     | Dataset        | Result                   | Rank |
|--------------------------|----------------|--------------------------|------|
| Chart Question Answering | ChartQA        | --                       | 356  |
| Multimodal Understanding | MMStar         | --                       | 324  |
| Closed-loop Planning     | Bench2Drive    | Driving Score 78.37      | 137  |
| Multimodal Understanding | MME            | Score 1.88e+3            | 83   |
| Trajectory Planning      | nuScenes       | ST-P3 L2 Error (1s) 0.23 | 49   |
| Motion                   | nuScenes (val) | minADE 1.264             | 49   |
| Object Detection         | nuScenes (val) | mAP 40.7                 | 48   |
| Open-loop Planning       | Bench2Drive    | Average L2 Error 0.72    | 36   |
| Diagram Understanding    | AI2D           | AI2D Score 76.3          | 33   |
| Online Mapping           | nuScenes (val) | mAP 53.5                 | 32   |

(Showing 10 of 15 rows.)
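For readers unfamiliar with the planning and motion metrics in the table, the snippet below sketches their conventional definitions: average L2 error as the mean Euclidean distance to the ground-truth trajectory over the horizon, and minADE as the best-of-K average displacement error. This follows the standard formulas, not the official ST-P3 or nuScenes evaluation code.

```python
import numpy as np

def avg_l2_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth
    trajectories over all timesteps. pred, gt: (T, 2) arrays in meters."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def min_ade(preds: np.ndarray, gt: np.ndarray) -> float:
    """minADE over K forecast modes: the smallest average displacement
    error among the candidates. preds: (K, T, 2), gt: (T, 2)."""
    ades = np.linalg.norm(preds - gt[None], axis=-1).mean(axis=-1)  # (K,)
    return float(ades.min())
```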

Other info

GitHub: https://github.com/xiaomi-research/unidrivevla
