
Dual Attention Networks for Multimodal Reasoning and Matching

About

We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving the state-of-the-art performance on public benchmarks for VQA and image-text matching.

Hyeonseob Nam, Jung-Woo Ha, Jeonghee Kim • 2016
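
To make the mechanism concrete, below is a minimal sketch (in PyTorch) of one reasoning step in the spirit of the abstract: visual and textual attentions are both steered by a shared memory vector, and the attended contexts update that memory over multiple steps. All names, layer shapes, and the memory-update rule are illustrative assumptions drawn from the abstract, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionStep(nn.Module):
    """One collaborative reasoning step: attend over image regions and
    over words, both conditioned on a shared memory, then update the
    memory. A hedged sketch; the paper's exact equations may differ."""

    def __init__(self, dim):
        super().__init__()
        self.v_proj = nn.Linear(dim, dim)   # project region features
        self.u_proj = nn.Linear(dim, dim)   # project word features
        self.m_proj = nn.Linear(dim, dim)   # project the shared memory
        self.v_score = nn.Linear(dim, 1)    # per-region attention score
        self.u_score = nn.Linear(dim, 1)    # per-word attention score

    def forward(self, V, U, m):
        # V: (B, N, dim) region features; U: (B, T, dim) word features
        # m: (B, dim) joint memory that steers both attentions
        m_exp = self.m_proj(m).unsqueeze(1)
        a_v = F.softmax(self.v_score(torch.tanh(self.v_proj(V) + m_exp)), dim=1)
        a_u = F.softmax(self.u_score(torch.tanh(self.u_proj(U) + m_exp)), dim=1)
        v_ctx = (a_v * V).sum(dim=1)        # attended visual context
        u_ctx = (a_u * U).sum(dim=1)        # attended textual context
        # Assumed update: fuse the two contexts and accumulate into memory.
        return m + v_ctx * u_ctx

# Toy usage: a few attention steps; answers would then be predicted from m.
B, N, T, dim = 2, 49, 12, 512
V, U = torch.randn(B, N, dim), torch.randn(B, T, dim)
m = V.mean(dim=1) * U.mean(dim=1)        # simple joint initialization
step = DualAttentionStep(dim)
for _ in range(2):                        # a small fixed number of steps
    m = step(V, U, m)
```

The matching model described in the abstract could reuse the same two attention blocks but keep separate visual and textual vectors, scoring image-sentence similarity (e.g., via an inner product) instead of predicting an answer.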

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantic Segmentation | ADE20K (val) | mIoU | 45.3 | 2731 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 39.4 | 460 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 55 | 439 |
| Text-to-Image Retrieval | Flickr30K (test) | R@1 | 39.4 | 423 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 39.4 | 375 |
| Image-to-Text Retrieval | Flickr30K (test) | R@1 | 55 | 370 |
| Image Retrieval | Flickr30K (test) | R@1 | 39.4 | 195 |
| Open-Ended Visual Question Answering | VQA 1.0 (test-dev) | Overall Accuracy | 64.3 | 100 |
| Text Retrieval | Flickr30K (test) | R@1 | 55 | 89 |
| Visual Question Answering (Multiple-choice) | VQA 1.0 (test-dev) | Accuracy (All) | 69.1 | 66 |
Showing 10 of 18 rows
