
Visual Grounding with Transformers

About

In this paper, we propose a transformer-based approach for visual grounding. Unlike previous proposal-and-rank frameworks that rely heavily on pretrained object detectors, or proposal-free frameworks that upgrade an off-the-shelf one-stage detector by fusing textual embeddings, our approach is built on top of a transformer encoder-decoder and is independent of any pretrained detectors or word embedding models. Termed VGTR -- Visual Grounding with TRansformers, our approach is designed to learn semantic-discriminative visual features under the guidance of the textual description without harming their localization ability. This information flow gives VGTR a strong capability to capture context-level semantics of both the vision and language modalities, enabling it to aggregate the accurate visual clues implied by the description and locate the object instance of interest. Experiments show that our method outperforms state-of-the-art proposal-free approaches by a considerable margin on five benchmarks while maintaining fast inference speed.
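The abstract describes an encoder-decoder transformer that fuses visual and textual tokens and directly regresses a box for the referred object. Below is a minimal, hypothetical sketch of such a grounding head in PyTorch; the class name, dimensions, and the plain concatenation of the two modalities are illustrative assumptions (VGTR itself uses a text-guided attention scheme), not the authors' implementation.

```python
# Hypothetical sketch of a transformer-based visual grounding head.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class GroundingTransformer(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        # Encoder attends over the fused visual + text tokens;
        # the decoder uses a single learned query to regress one box.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.query = nn.Parameter(torch.zeros(1, 1, d_model))  # one object query
        self.box_head = nn.Linear(d_model, 4)  # (cx, cy, w, h), normalized

    def forward(self, visual_tokens, text_tokens):
        # Simple concatenation so self-attention can mix modalities
        # (an assumption; the paper's fusion is more structured).
        fused = torch.cat([visual_tokens, text_tokens], dim=1)
        q = self.query.expand(visual_tokens.size(0), -1, -1)
        out = self.transformer(fused, q)        # (B, 1, d_model)
        return self.box_head(out).sigmoid().squeeze(1)  # (B, 4) in [0, 1]

model = GroundingTransformer()
vis = torch.randn(2, 49, 256)   # e.g. a 7x7 feature map, flattened
txt = torch.randn(2, 12, 256)   # e.g. 12 embedded word tokens
boxes = model(vis, txt)
print(boxes.shape)
```

Because the head is detector-free, the box is predicted directly from the fused features rather than ranked from region proposals, which is the property the abstract emphasizes.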

Ye Du, Zehua Fu, Qingjie Liu, Yunhong Wang · 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 63.91 | 354 |
| Referring Expression Comprehension | RefCOCO (val) | Accuracy | 78.29 | 344 |
| Referring Expression Comprehension | RefCOCO (testA) | Accuracy | 81.49 | 342 |
| Referring Expression Comprehension | RefCOCOg (test) | Accuracy | 67.23 | 300 |
| Referring Expression Comprehension | RefCOCOg (val) | Accuracy | 65.73 | 300 |
| Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy | 56.51 | 244 |
| Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy | 70.09 | 216 |
| Referring Expression Comprehension | RefCOCO (testB) | Accuracy | 73.78 | 205 |
| Referring Expression Comprehension | RefCOCOg (test (U)) | Precision | 67.23 | 71 |
| Referring Expression Comprehension | RefCOCOg (val (U)) | Accuracy | 64.19 | 57 |

Showing 10 of 23 rows.
