
Exploring Models and Data for Image Question Answering

About

This work aims to address the problem of image-based question answering (QA) with new models and datasets. We propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset with more evenly distributed answers. A suite of baseline results on this new dataset is also presented.
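The question generation idea can be illustrated with a toy example. The paper's actual algorithm relies on syntactic parsing to produce several question types (object, number, color, location); the regex-based function below is only a hypothetical sketch of the object-question case, handling captions of the form "There is a(n) <object> <preposition> <place>."

```python
import re

def caption_to_qa(caption):
    """Toy sketch: rewrite a declarative image description into an
    object-type question-answer pair, e.g.
    'There is a red car in the street.' -> ('What is in the street?', 'car').
    The real algorithm uses a full parser; this regex covers only one
    simple sentence pattern and is purely illustrative."""
    m = re.match(
        r"there (?:is|are) (?:a|an|the)?\s*(?:\w+\s)?(\w+) (in|on|at) (.+?)\.?$",
        caption.strip(),
        re.IGNORECASE,
    )
    if not m:
        return None  # pattern not recognized
    obj, prep, place = m.groups()
    question = f"What is {prep} {place}?"
    return question, obj.lower()
```

A parser-based version would additionally normalize number words, strip adjectives more reliably, and generate the other question types from the same parse tree.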

Mengye Ren, Ryan Kiros, Richard Zemel • 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA (test-dev) | Acc (All) | 53.7 | 147 |
| Visual Question Answering | COCO-QA (test) | WUPS (IoU=0.9) | 67.9 | 51 |
| Image Question Answering | DAQUAR REDUCED (test) | Accuracy | 35.8 | 33 |
| Video Question Answering | SUTD-TrafficQA (Setting-1/2) | Accuracy | 54.25 | 26 |
| Video Question Answering | TrafficQA Setting-1/4 (test) | Accuracy | 29.91 | 15 |
| Visual Question Answering | FVQA (test) | Top-1 Acc | 24.98 | 14 |
| Fact-based Visual Question Answering | FVQA 1.0 (test) | WUPS@0.0 (Top-1) | 63.42 | 13 |
| Fact-based Visual Question Answering | FVQA (test) | Top-1 WUPS@0.9 | 31.96 | 13 |
| Video Question Answering | TGIF-QA original (test) | Repetition Count Loss (Mean L2) | 4.8095 | 13 |
| Video Question Answering | SUTD-TrafficQA Setting-1/4 | Accuracy | 29.91 | 12 |

Showing 10 of 18 rows.

Other info

Code
