X-Driver: Explainable Autonomous Driving with Vision-Language Models
About
End-to-end autonomous driving has advanced significantly, offering system simplicity and stronger driving performance than conventional modular pipelines in both open-loop and closed-loop settings. However, existing frameworks still suffer from low success rates in closed-loop evaluation, highlighting their limitations for real-world deployment. In this paper, we introduce X-Driver, a unified multi-modal large language model (MLLM) framework designed for closed-loop autonomous driving, which leverages Chain-of-Thought (CoT) reasoning and autoregressive modeling to enhance perception and decision-making. We validate X-Driver on multiple autonomous driving tasks using public benchmarks in the CARLA simulation environment, including Bench2Drive [6]. Our experimental results demonstrate superior closed-loop performance, surpassing the current state of the art (SOTA) while improving the interpretability of driving decisions. These findings underscore the importance of structured reasoning in end-to-end driving and establish X-Driver as a strong baseline for future research in closed-loop autonomous driving.
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Closed-loop Planning | Bench2Drive | Driving Score: 51.7 | 90 |
| Closed-loop Autonomous Driving | Bench2Drive closed-loop | Driving Score: 51.7 | 24 |
| Closed-loop Autonomous Driving | Bench2Drive 50 closed-loop | Driving Score: 57.8 | 3 |