Impromptu VLA: Open Weights and Open Data for Driving Vision-Language-Action Models

About

Vision-Language-Action (VLA) models for autonomous driving show promise but falter in unstructured corner-case scenarios, largely due to a scarcity of targeted benchmarks. To address this, we introduce Impromptu VLA. Our core contribution is the Impromptu VLA Dataset: over 80,000 meticulously curated video clips, distilled from over 2M source clips drawn from 8 open-source large-scale datasets. The dataset is built upon our novel taxonomy of four challenging unstructured categories and features rich, planning-oriented question-answering annotations and action trajectories. Crucially, experiments demonstrate that VLAs trained with our dataset achieve substantial performance gains on established benchmarks: improved closed-loop NeuroNCAP scores and collision rates, and near state-of-the-art L2 accuracy in open-loop nuScenes trajectory prediction. Furthermore, our Q&A suite serves as an effective diagnostic, revealing clear VLM improvements in perception, prediction, and planning. Our code, data, and models are available at https://github.com/ahydchh/Impromptu-VLA.

Haohan Chi, Huan-ang Gao, Ziming Liu, Jianing Liu, Chenyu Liu, Jinwei Li, Kaisen Yang, Yangcheng Yu, Zeda Wang, Wenyi Li, Leichen Wang, Xingtao Hu, Hao Sun, Hang Zhao, Hao Zhao • 2025
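
To make the dataset description above concrete, here is a minimal sketch of what a single training sample could contain: one or more planning-oriented Q&A pairs plus a future ego trajectory. All field names and values below are illustrative assumptions, not the dataset's actual schema; see the GitHub repository above for the real format.

```python
# Hypothetical Impromptu VLA sample; field names are illustrative assumptions,
# not the released schema (see the GitHub repository for the actual format).
sample = {
    "source_dataset": "<one of the 8 open-source source datasets>",
    "category": "unstructured_scenario",  # one of the four unstructured categories (label assumed)
    "qa_pairs": [                         # planning-oriented question-answering annotations
        {
            "question": "A construction barrier blocks the right lane. What should the ego vehicle do?",
            "answer": "Decelerate and merge left while keeping a safe gap to trailing traffic.",
        },
    ],
    "trajectory": [                       # future ego waypoints (x, y) in meters, BEV frame assumed
        (0.0, 0.0), (1.2, 0.1), (2.5, 0.3), (3.9, 0.6),
    ],
}
print(f"{len(sample['qa_pairs'])} QA pair(s), {len(sample['trajectory'])} waypoints")
```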

Related benchmarks

Task                             Dataset               Metric                  Result  Rank
Open-loop trajectory prediction  nuScenes v1.0 (test)  L2 Error (1s)           0.13    29
Open-loop planning               nuScenes              L2 Error (1s)           0.13    20
Closed-loop simulation           NeuroNCAP             NeuroNCAP Score (Avg)   2.06    9
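
The L2 error reported above is the Euclidean distance between the predicted and ground-truth ego positions at a given time horizon. The sketch below illustrates the computation, assuming 2D bird's-eye-view waypoints sampled every 0.5 s (the common nuScenes convention); exact averaging conventions across frames and horizons vary between evaluation protocols, so treat this as illustrative rather than the benchmark's official scorer.

```python
import numpy as np

def l2_error_at_horizon(pred_xy, gt_xy, horizon_s, dt=0.5):
    """L2 distance between predicted and ground-truth ego positions at a
    time horizon. Both inputs are (T, 2) arrays of BEV waypoints sampled
    every `dt` seconds (0.5 s assumed, matching the usual nuScenes setup)."""
    idx = int(round(horizon_s / dt)) - 1          # waypoint index for the horizon
    return float(np.linalg.norm(pred_xy[idx] - gt_xy[idx]))

# Toy example: a prediction that is 0.13 m off at the 1 s waypoint.
gt = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 3.0], [0.0, 4.0]])  # 0.5 s .. 2.0 s
pred = gt.copy()
pred[1] += np.array([0.13, 0.0])                  # perturb only the 1 s waypoint
print(l2_error_at_horizon(pred, gt, horizon_s=1.0))  # -> 0.13
```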
