
LingoQA: Visual Question Answering for Autonomous Driving

About

We introduce LingoQA, a novel dataset and benchmark for visual question answering in autonomous driving. The dataset contains 28K unique short video scenarios and 419K annotations. Evaluating state-of-the-art vision-language models on our benchmark shows that their performance is below human capabilities, with GPT-4V responding truthfully to 59.6% of the questions compared to 96.6% for humans. For evaluation, we propose a truthfulness classifier, called Lingo-Judge, that achieves a 0.95 Spearman correlation coefficient with human evaluations, surpassing existing techniques like METEOR, BLEU, CIDEr, and GPT-4. We establish a baseline vision-language model and run extensive ablation studies to understand its performance. We release our dataset and benchmark as an evaluation platform for vision-language models in autonomous driving.
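The Spearman correlation cited above measures how well a metric's ranking of model answers agrees with human rankings. As a minimal sketch of how such agreement is computed (the scores below are made-up illustrative numbers, not LingoQA data):

```python
def rank(values):
    # Assign 1-based ranks; this toy data has no ties,
    # so no tie-averaging is needed.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    # Spearman rho via the rank-difference formula:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Hypothetical human truthfulness ratings vs. automatic metric scores
human_scores = [0.9, 0.2, 0.7, 0.4, 1.0, 0.1]
metric_scores = [0.8, 0.3, 0.5, 0.6, 0.9, 0.2]
print(f"Spearman rho = {spearman(human_scores, metric_scores):.3f}")
```

A metric whose rho with human judgments approaches 1.0, as reported for Lingo-Judge, ranks answers nearly the same way humans do; classic n-gram metrics such as BLEU or METEOR typically score lower on this agreement.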

Ana-Maria Marcu, Long Chen, Jan Hünermann, Alice Karnsund, Benoit Hanotte, Prajwal Chidananda, Saurabh Nair, Vijay Badrinarayanan, Alex Kendall, Jamie Shotton, Elahe Arani, Oleg Sinavski • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Video Question Answering | LingoQA (test) | Lingo-Judge | 60.8 | 8
Autonomous Driving Question Answering | LingoQA (val) | Lingo-Judge | 60.8 | 6
