Stacked Attention Networks for Image Question Answering

About

This paper presents stacked attention networks (SANs) that learn to answer natural language questions about images. SANs use the semantic representation of a question as a query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA datasets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates how the SAN locates, layer by layer, the relevant visual clues that lead to the answer to the question.
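To make the multi-hop attention concrete, here is a minimal PyTorch sketch of one attention layer and its stacking, following the paper's equations (h_A = tanh(W_I v_I + W_Q u), p_I = softmax(W_P h_A), with the attended vector added back into the query). The module names, dimensions, and the two-hop setting below are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLayer(nn.Module):
    """One SAN attention hop over image regions, driven by a query vector."""
    def __init__(self, d: int, k: int):
        super().__init__()
        self.w_image = nn.Linear(d, k, bias=False)  # W_I applied to region features
        self.w_query = nn.Linear(d, k)              # W_Q applied to the query
        self.w_score = nn.Linear(k, 1)              # W_P producing attention logits

    def forward(self, v_image, u):
        # v_image: (batch, regions, d) region features; u: (batch, d) query
        h = torch.tanh(self.w_image(v_image) + self.w_query(u).unsqueeze(1))
        p = F.softmax(self.w_score(h).squeeze(-1), dim=1)  # (batch, regions)
        v_tilde = (p.unsqueeze(-1) * v_image).sum(dim=1)   # attended image vector
        return v_tilde + u, p                              # refined query, attention map

class StackedAttention(nn.Module):
    """Stack attention hops so each layer refines the previous query."""
    def __init__(self, d: int, k: int, num_layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(AttentionLayer(d, k) for _ in range(num_layers))

    def forward(self, v_image, question_vec):
        u, maps = question_vec, []
        for layer in self.layers:
            u, p = layer(v_image, u)
            maps.append(p)  # per-hop attention maps, useful for visualization
        return u, maps      # final u would feed a softmax answer classifier

# Toy usage: 196 regions (a 14x14 grid), feature dim 512, two attention hops.
san = StackedAttention(d=512, k=256, num_layers=2)
v = torch.randn(8, 196, 512)
q = torch.randn(8, 512)
u, maps = san(v, q)
print(u.shape, maps[0].shape)  # torch.Size([8, 512]) torch.Size([8, 196])
```

Adding the attended vector back into the query (rather than replacing it) is what lets later hops sharpen, rather than restart, the search over image regions.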

Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola • 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 63 | 664 |
| Visual Question Answering | VQA (test-dev) | Acc (All) | 58.7 | 147 |
| Visual Question Answering | VQA 2.0 (val) | Accuracy (Overall) | 61.7 | 143 |
| Visual Dialog | VisDial v0.9 (val) | MRR | 57.64 | 141 |
| Visual Question Answering | VQA (test-std) | -- | -- | 110 |
| Visual Question Answering | VQA-CP v2 (test) | Overall Accuracy | 24.96 | 109 |
| Open-Ended Visual Question Answering | VQA 1.0 (test-dev) | Overall Accuracy | 58.7 | 100 |
| Visual Question Answering | VQA v2 (val) | Accuracy | 55.61 | 99 |
| Visual Question Answering | CLEVR (test) | Overall Accuracy | 68.5 | 61 |
| Visual Dialog | VisDial v0.9 (test) | MRR | 57.64 | 58 |

Showing 10 of 44 rows.
