Exploring Models and Data for Image Question Answering
About
This work addresses image-based question answering (QA) with new models and datasets. We propose using neural networks and visual-semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce a dataset an order of magnitude larger, with more evenly distributed answers. A suite of baseline results on this new dataset is also presented.
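The question generation idea can be illustrated with a toy, rule-based sketch. This is not the authors' implementation (the paper operates on parsed sentences and produces several question types); the sketch below only handles captions of the hypothetical form "A/The <subject> is <verb>ing a/the <object>." and turns them into object questions, which conveys the gist of converting a description into a QA pair.

```python
def caption_to_qa(caption):
    """Toy caption-to-QA conversion, e.g.
    'A man is riding a horse.' -> ('What is the man riding?', 'horse').

    Assumes a simple '<subject> is <verb>ing <object>' caption; the paper's
    actual algorithm works on parse trees and covers more question types.
    """
    words = caption.rstrip(".").split()
    answer = words[-1].lower()  # treat the final noun as the answer
    # drop the object and its article ("a horse" -> removed)
    body = words[:-2] if words[-2].lower() in {"a", "an", "the"} else words[:-1]
    # drop the leading article of the subject ("A man" -> "man")
    if body[0].lower() in {"a", "an", "the"}:
        body = body[1:]
    # front the auxiliary "is": "man is riding" -> "What is the man riding?"
    aux = body.index("is")
    subject = " ".join(body[:aux])
    rest = " ".join(body[aux + 1:])
    question = f"What is the {subject} {rest}?"
    return question, answer

print(caption_to_qa("A man is riding a horse."))
# -> ('What is the man riding?', 'horse')
```

Even this crude rewrite shows why caption datasets such as MS-COCO can be converted into QA pairs at scale, which is how the larger dataset in this work was produced.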
Mengye Ren, Ryan Kiros, Richard Zemel · 2015
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA (test-dev) | Acc (All) | 53.7 | 147 |
| Visual Question Answering | COCO-QA (test) | WUPS (IoU=0.9) | 67.9 | 51 |
| Image Question Answering | DAQUAR REDUCED (test) | Accuracy | 35.8 | 33 |
| Video Question Answering | SUTD-TrafficQA (Setting-1/2) | Accuracy | 54.25 | 26 |
| Video Question Answering | TrafficQA Setting-1/4 (test) | Accuracy | 29.91 | 15 |
| Visual Question Answering | FVQA (test) | Top-1 Acc | 24.98 | 14 |
| Fact-based Visual Question Answering | FVQA 1.0 (test) | WUPS@0.0 (Top-1) | 63.42 | 13 |
| Fact-based Visual Question Answering | FVQA (test) | Top-1 WUPS@0.9 | 31.96 | 13 |
| Video Question Answering | TGIF-QA original (test) | Repetition Count Loss (Mean L2) | 4.8095 | 13 |
| Video Question Answering | SUTD-TrafficQA Setting-1/4 | Accuracy | 29.91 | 12 |
Showing the top 10 of 18 benchmark rows.