
LMAD: Integrated End-to-End Vision-Language Model for Explainable Autonomous Driving

About

Large vision-language models (VLMs) have shown promising capabilities in scene understanding, enhancing the explainability of driving behaviors and interactivity with users. Existing methods primarily fine-tune VLMs on on-board multi-view images and scene reasoning text, but this approach often lacks the holistic, nuanced scene recognition and strong spatial awareness required for autonomous driving, especially in complex situations. To address this gap, we propose a novel vision-language framework tailored for autonomous driving, called LMAD. Our framework emulates modern end-to-end driving paradigms by incorporating comprehensive scene understanding and a task-specialized structure into VLMs. In particular, we introduce preliminary scene interaction and specialized expert adapters within the same driving task structure, which better align VLMs with autonomous driving scenarios. Furthermore, our approach is designed to be fully compatible with existing VLMs while seamlessly integrating with planning-oriented driving systems. Extensive experiments on the DriveLM and nuScenes-QA datasets demonstrate that LMAD significantly boosts the performance of existing VLMs on driving reasoning tasks, setting a new standard in explainable autonomous driving.
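The abstract does not spell out the adapter design, so the following is only a rough illustration of the general idea of task-specialized expert adapters attached to a frozen VLM block: one lightweight bottleneck adapter per driving sub-task, selected at run time. The class names (TaskExpertAdapter, TaskSpecializedLayer), the bottleneck design, and the task list are hypothetical assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class TaskExpertAdapter(nn.Module):
    """Bottleneck adapter for one driving sub-task (hypothetical design)."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck update leaves the frozen VLM features intact.
        return x + self.up(self.act(self.down(x)))


class TaskSpecializedLayer(nn.Module):
    """Wraps a frozen VLM block with one expert adapter per driving task."""

    def __init__(self, vlm_block: nn.Module, hidden_dim: int, tasks):
        super().__init__()
        self.vlm_block = vlm_block
        for p in self.vlm_block.parameters():
            p.requires_grad = False  # only the adapters are trained
        self.adapters = nn.ModuleDict(
            {t: TaskExpertAdapter(hidden_dim) for t in tasks}
        )

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        h = self.vlm_block(x)
        return self.adapters[task](h)  # route through the chosen expert


# Usage: route a batch of token features through the "planning" expert.
layer = TaskSpecializedLayer(
    vlm_block=nn.Linear(768, 768),  # stand-in for a frozen transformer block
    hidden_dim=768,
    tasks=["perception", "prediction", "planning"],
)
tokens = torch.randn(2, 16, 768)  # (batch, sequence, hidden)
out = layer(tokens, task="planning")
print(out.shape)  # torch.Size([2, 16, 768])
```

Keeping the base VLM frozen and training only small per-task adapters is one plausible way to get the compatibility with existing VLMs that the abstract claims, though the paper may use a different mechanism.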

Nan Song, Bozhou Zhang, Xiatian Zhu, Jiankang Deng, Li Zhang • 2025

Related benchmarks

Task      | Dataset        | Result        | Rank
Graph VQA | DriveLM (test) | BLEU-4: 54.49 | 6
