SimLingo: Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment

About

Integrating large language models (LLMs) into autonomous driving has attracted significant attention, with the hope of improving generalization and explainability. However, existing methods often focus on either driving or vision-language understanding; achieving both high driving performance and extensive language understanding remains challenging. Moreover, the dominant approach to vision-language understanding is visual question answering (VQA). For autonomous driving, however, this is useful only if it is aligned with the action space; otherwise, the model's answers could be inconsistent with its behavior. We therefore propose a model that handles three different tasks: (1) closed-loop driving, (2) vision-language understanding, and (3) language-action alignment. Our model SimLingo is based on a vision-language model (VLM) and operates on camera input alone, excluding expensive sensors like LiDAR. SimLingo obtains state-of-the-art performance on the widely used CARLA simulator on the Bench2Drive benchmark and is the winning entry of the CARLA Challenge 2024. Additionally, it achieves strong results on a wide variety of language-related tasks while maintaining high driving performance.
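To make the three-task setup concrete, the sketch below shows one way a camera-only vision-language backbone can feed both a driving head and a language head, so the same prompt-conditioned representation serves closed-loop driving, VQA, and language-action alignment probes. This is a minimal illustration under assumed shapes and module names (`CameraOnlyVLMPolicy`, the toy encoders, the waypoint parameterization are all hypothetical), not SimLingo's actual architecture.

```python
# Minimal sketch: one camera-only VLM backbone with a shared fused
# representation feeding (1) an action head (waypoints) and (2) a
# language head (answer tokens). All names and shapes are illustrative.
import torch
import torch.nn as nn


class CameraOnlyVLMPolicy(nn.Module):
    def __init__(self, d_model: int = 256, vocab_size: int = 1000,
                 n_waypoints: int = 8):
        super().__init__()
        # Toy stand-in for a pretrained VLM: image encoder + text embedding.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, d_model),
        )
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)
        # Task heads sharing the fused vision-language representation.
        self.action_head = nn.Linear(d_model, n_waypoints * 2)  # (x, y) waypoints
        self.language_head = nn.Linear(d_model, vocab_size)     # answer-token logits

    def forward(self, image: torch.Tensor, prompt_ids: torch.Tensor):
        img = self.image_encoder(image)                # (B, d_model)
        txt = self.text_embed(prompt_ids).mean(dim=1)  # (B, d_model), pooled prompt
        h = torch.relu(self.fuse(torch.cat([img, txt], dim=-1)))
        waypoints = self.action_head(h).view(image.size(0), -1, 2)
        answer_logits = self.language_head(h)
        return waypoints, answer_logits


# Usage: the same forward pass serves driving ("follow the route"),
# VQA ("is the light red?"), and alignment probes ("change to the left
# lane"), whose predicted waypoints can be compared against the
# instruction to check that the model acts consistently with its words.
model = CameraOnlyVLMPolicy()
image = torch.randn(1, 3, 128, 256)       # single front-camera frame
prompt = torch.randint(0, 1000, (1, 12))  # tokenized instruction or question
waypoints, answer_logits = model(image, prompt)
print(waypoints.shape, answer_logits.shape)  # (1, 8, 2), (1, 1000)
```

The key design point this sketch captures is that language and action share one representation: an instruction changes the fused state and therefore the predicted waypoints, which is what makes language-action alignment testable in the first place.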

Katrin Renz, Long Chen, Elahe Arani, Oleg Sinavski • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Closed-loop Planning | Bench2Drive | Driving Score | 86.02 | 90 |
| Closed-loop Autonomous Driving | Bench2Drive | Driving Score (DS) | 85.07 | 21 |
| Autonomous Driving | Bench2Drive local benchmark | DS | 85.94 | 14 |
| Autonomous Driving | CARLA Leaderboard 2.0 (official leaderboard) | Driving Score | 6.25 | 13 |
| End-to-end Driving | CARLA Bench2Drive v1 (test) | Driving Score | 85.1 | 11 |
| End-to-end Driving | CARLA Longest6 v2 (test) | Driving Score | 22 | 11 |
| Autonomous Driving | CARLA Leaderboard Sensor Track 2.0 (official) | DS | 6.87 | 4 |
