
Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering

About

We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process which constitutes a single "hop" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].
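The core mechanism in the abstract, a question-guided attention "hop" over spatial region features, can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: the real model uses learned projections of word embeddings and CNN patch features, whereas here `regions`, `question`, and the additive query refinement for the second hop are all simplifying assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(regions, query):
    """One memory 'hop': score each spatial region against the query,
    then return the attention-weighted sum of region features."""
    scores = regions @ query          # (R,) dot-product alignment
    weights = softmax(scores)         # attention distribution over regions
    evidence = weights @ regions      # (d,) selected visual evidence
    return evidence, weights

rng = np.random.default_rng(0)
R, d = 49, 128                        # e.g. a 7x7 grid of CNN activations
regions = rng.standard_normal((R, d))   # spatial memory: one vector per region
question = rng.standard_normal(d)       # embedded question vector

# First hop: the question attends to image regions.
evidence1, w1 = attention_hop(regions, question)
# Second hop: the query is refined with the first hop's evidence and
# attends again (a residual-style combination, assumed for illustration).
evidence2, w2 = attention_hop(regions, question + evidence1)
```

Each hop produces a normalized attention map (`w1`, `w2`) over the region grid; visualizing these maps is how the paper inspects which image patches the network used to answer spatial questions.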

Huijuan Xu, Kate Saenko • 2015

Related benchmarks

Task | Dataset | Result | Rank
Image Captioning | MS COCO Karpathy (test) | -- | 682
Visual Question Answering | VQA (test-dev) | Acc (All): 57.99 | 147
Visual Question Answering | VQA (test-std) | -- | 110
Open-Ended Visual Question Answering | VQA 1.0 (test-dev) | Overall Accuracy: 58 | 100
Open-Ended Visual Question Answering | VQA 1.0 (test-standard) | Overall Accuracy: 58.24 | 50
Visual Question Answering | VQA 1.0 (test-dev) | Overall Accuracy: 58 | 44
Image Question Answering | DAQUAR REDUCED (test) | Accuracy: 40.07 | 33
Open-Ended Visual Question Answering | VQA (test-standard) | Accuracy (Overall): 58.2 | 32
Visual Question Answering | VQA 1 (test-standard) | VQA Open-Ended Accuracy (All): 58.24 | 28
Visual Question Answering | DAQUAR (reduced) | Accuracy: 40.07 | 8
Showing 10 of 11 rows
