
Rethinking Diversified and Discriminative Proposal Generation for Visual Grounding

About

Visual grounding aims to localize an object in an image referred to by a textual query phrase. Various visual grounding approaches have been proposed, and the problem can be modularized into a general framework: proposal generation, multi-modal feature representation, and proposal ranking. Of these three modules, most existing approaches focus on the latter two, and the importance of proposal generation is generally neglected. In this paper, we rethink the problem of what properties make a good proposal generator. We introduce diversity and discrimination simultaneously when generating proposals, and in doing so propose the Diversified and Discriminative Proposal Network (DDPN). Based on the proposals generated by DDPN, we propose a high-performance baseline model for visual grounding and evaluate it on four benchmark datasets. Experimental results demonstrate that our model delivers significant improvements on all the tested datasets (e.g., 18.8% improvement on ReferItGame and 8.2% improvement on Flickr30k Entities over the existing state of the art, respectively).
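The three-module framework described above can be sketched end to end. The code below is a toy illustration, not the authors' DDPN implementation: the proposal boxes, feature vectors, and function names are all hypothetical assumptions, and cosine similarity stands in for the learned multi-modal matching score used in real models.

```python
# Toy sketch of the modular visual grounding pipeline from the abstract:
# (1) proposal generation, (2) multi-modal feature representation,
# (3) proposal ranking. Pure Python; all values are illustrative.

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def rank_proposals(proposals, query_feat):
    """Module 3 (proposal ranking): score each candidate box's visual
    feature against the query feature and return the best-scoring box."""
    scored = [(cosine(feat, query_feat), box) for box, feat in proposals]
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[0][1]

# Module 1 (proposal generation) would come from a detector in practice;
# here we hard-code two candidate boxes (x1, y1, x2, y2) with toy features.
proposals = [
    ((10, 10, 50, 50), [0.9, 0.1, 0.0]),
    ((60, 20, 120, 90), [0.1, 0.8, 0.3]),
]

# Module 2 (multi-modal representation) would embed the query phrase into
# the same space; here the query feature is a toy vector near proposal 1.
query_feat = [1.0, 0.0, 0.1]

best_box = rank_proposals(proposals, query_feat)
print(best_box)  # the box whose feature best matches the query
```

DDPN's contribution targets module 1: generating proposals that are both diverse (covering many plausible objects) and discriminative (each carrying features that separate objects well), so that the downstream ranking step has better candidates to choose from.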

Zhou Yu, Jun Yu, Chenchao Xiang, Zhou Zhao, Qi Tian, Dacheng Tao• 2018

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Referring Expression Comprehension | RefCOCO+ (val) | -- | 354 |
| Referring Expression Comprehension | RefCOCO (val) | -- | 344 |
| Referring Expression Comprehension | RefCOCO (testA) | -- | 342 |
| Referring Expression Comprehension | RefCOCO+ (testB) | -- | 244 |
| Referring Expression Comprehension | RefCOCO+ (testA) | -- | 216 |
| Visual Grounding | RefCOCO+ (val) | Accuracy 64.8 | 212 |
| Visual Grounding | RefCOCO+ (testA) | Accuracy 70.5 | 206 |
| Referring Expression Comprehension | RefCOCO (testB) | -- | 205 |
| Visual Grounding | RefCOCO+ (testB) | Accuracy 54.1 | 180 |
| Visual Grounding | RefCOCO (val) | Accuracy 76.8 | 147 |
Showing 10 of 27 rows
