
VLP: Vision Language Planning for Autonomous Driving

About

Autonomous driving is a complex and challenging task that aims at safe motion planning through scene understanding and reasoning. While vision-only autonomous driving methods have recently achieved notable performance through enhanced scene understanding, several key issues, including a lack of reasoning, poor generalization, and long-tail scenarios, still need to be addressed. In this paper, we present VLP, a novel Vision-Language-Planning framework that exploits language models to bridge the gap between linguistic understanding and autonomous driving. VLP enhances autonomous driving systems by strengthening both the source memory foundation and the self-driving car's contextual understanding. VLP achieves state-of-the-art end-to-end planning performance on the challenging nuScenes dataset, reducing the average L2 error and collision rate by 35.9% and 60.5%, respectively, compared to the previous best method. Moreover, VLP shows improved performance in challenging long-tail scenarios and strong generalization capabilities when faced with new urban environments.

Chenbin Pan, Burhaneddin Yaman, Tommaso Nesti, Abhirup Mallik, Alessandro G. Allievi, Senem Velipasalar, Liu Ren • 2024

Related benchmarks

Task                      Dataset                               Metric                Result   Rank
Open-loop planning        nuScenes (val)                        L2 Error (3s)         0.78     151
3D Multi-Object Tracking  nuScenes (val)                        AMOTA                 36.8     115
Planning                  nuScenes v1.0-trainval (val)          ST-P3 L2 Error (1s)   0.3      39
Motion                    nuScenes (val)                        minADE                0.68     34
Open-loop planning        nuScenes v1.0 (test)                  L2 Error (1s)         0.3      28
Trajectory Planning       nuScenes 1.0 (test)                   L2 Error (Average)    0.63     14
Planning                  nuScenes Boston v1.0-trainval (test)  Avg L2 Error          0.73     4
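The L2-error figures above measure the Euclidean distance between the planned ego trajectory and the ground-truth trajectory at a given horizon (e.g. 1s or 3s). A minimal sketch of how such an average L2 error is typically computed for open-loop planning on nuScenes is below; the function name and array shapes are illustrative assumptions, not VLP's actual evaluation code.

```python
import numpy as np

def average_l2_error(pred_traj, gt_traj):
    """Mean Euclidean distance between predicted and ground-truth
    ego waypoints (a sketch of the common open-loop planning metric;
    exact evaluation protocols vary across papers)."""
    pred = np.asarray(pred_traj, dtype=float)  # shape (T, 2): x/y waypoints
    gt = np.asarray(gt_traj, dtype=float)      # shape (T, 2)
    # Per-waypoint L2 distance, then averaged over the horizon.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

The "L2 Error (Average)" entries in the table would then correspond to averaging this value over the reported horizons (e.g. 1s, 2s, 3s) across all evaluation scenes.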
