
MAttNet: Modular Attention Network for Referring Expression Comprehension

About

In this paper, we address referring expression comprehension: localizing the image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose decomposing them into three modular components related to subject appearance, location, and relationship to other objects. This allows the model to adapt flexibly to expressions containing different types of information in an end-to-end framework. Our model, the Modular Attention Network (MAttNet), uses two types of attention: language-based attention, which learns the module weights as well as the word/phrase attention each module should focus on; and visual attention, which lets the subject and relationship modules focus on relevant image components. The module weights dynamically combine the scores from all three modules into an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. Demo and code are provided.
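The scoring scheme described above can be sketched in a few lines. This is a rough illustration, not the authors' implementation: the function names, the module-score values, and the weight logits below are all hypothetical, standing in for the outputs of the learned subject, location, and relationship modules and the language attention network.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def overall_score(module_scores, weight_logits):
    """Combine per-module matching scores into one overall score.

    module_scores: [s_subj, s_loc, s_rel] -- how well the candidate region
        matches the expression under each module (hypothetical values here).
    weight_logits: raw logits from the language-based attention; softmax
        turns them into module weights that sum to 1, so the weights
        dynamically emphasize whichever modules the expression relies on.
    """
    weights = softmax(weight_logits)
    return sum(w * s for w, s in zip(weights, module_scores))

# Example: an appearance-heavy expression ("the man in the red shirt")
# would push most of the weight onto the subject module.
score = overall_score([0.9, 0.2, 0.1], [2.0, 0.5, 0.1])
```

At test time, such a score would be computed for every candidate region and the highest-scoring region returned as the referred object.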

Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, Tamara L. Berg • 2018

Related benchmarks

Task                                 Dataset           Metric    Result  Rank
Referring Expression Comprehension   RefCOCO+ (val)    Accuracy  71.01   345
Referring Expression Comprehension   RefCOCO (val)     Accuracy  85.65   335
Referring Expression Comprehension   RefCOCO (testA)   Accuracy  85.26   333
Referring Expression Comprehension   RefCOCOg (test)   Accuracy  78.12   291
Referring Expression Comprehension   RefCOCOg (val)    Accuracy  78.10   291
Referring Expression Comprehension   RefCOCO+ (testB)  Accuracy  56.02   235
Referring Expression Segmentation    RefCOCO (testA)   cIoU      62.37   217
Referring Expression Comprehension   RefCOCO+ (testA)  Accuracy  71.62   207
Referring Expression Segmentation    RefCOCO+ (val)    cIoU      46.67   201
Referring Image Segmentation         RefCOCO+ (testB)  mIoU      40.08   200

(Showing 10 of 110 rows; accuracy and IoU values are percentages.)
