Grounding Referring Expressions in Images by Variational Context

About

We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., "largest elephant standing behind baby elephant". This is a general yet challenging vision-language task, since it requires not only the localization of objects but also the multimodal comprehension of context: visual attributes (e.g., "largest", "baby") and relationships (e.g., "behind") that distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity of modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling with multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Our model exploits the reciprocal relation between the referent and its context: each influences the estimate of the posterior distribution of the other, which greatly reduces the search space over contexts and yields better localization of the referent. We develop a novel cue-specific language-vision embedding network that learns this reciprocity model end-to-end. We also extend the model to the unsupervised setting, where no annotation of the referent is available. Extensive experiments on various benchmarks show consistent improvements over state-of-the-art methods in both supervised and unsupervised settings.
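The abstract describes the idea only in prose, so the following is a hedged reconstruction of the kind of variational lower bound it alludes to. The notation is ours, not necessarily the paper's: y is the referent region, z the latent context region, L the expression, and q the approximate posterior; the paper's exact factorization may differ.

```latex
\log p(y \mid L) \;=\; \log \sum_{z} p(y, z \mid L)
\;\ge\; \mathbb{E}_{q(z \mid y, L)}\!\big[\log p(y \mid z, L)\big]
\;-\; \mathrm{KL}\!\big(q(z \mid y, L) \,\|\, p(z \mid L)\big)
```

Maximizing such a bound couples the two directions of the reciprocity: the approximate posterior localizes the context, while the likelihood term scores referents given that context. Below is a minimal PyTorch sketch of this coupling; all module and parameter names (VariationalContextScorer, dim_region, dim_lang) are illustrative assumptions rather than the paper's architecture, and for brevity the context posterior here conditions on the expression alone rather than on each referent hypothesis.

```python
import torch
import torch.nn as nn

class VariationalContextScorer(nn.Module):
    """Hedged sketch of reciprocal referent-context scoring.

    A soft posterior over candidate context regions is estimated first
    (playing the role of q(z | L)), then every region is scored as the
    referent conditioned on the expected context feature (p(y | z, L)).
    All names and dimensions here are illustrative assumptions.
    """

    def __init__(self, dim_region: int, dim_lang: int, dim_hidden: int = 512):
        super().__init__()
        # Attention head producing logits over candidate context regions.
        self.ctx_att = nn.Sequential(
            nn.Linear(dim_region + dim_lang, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, 1),
        )
        # Scorer for each region as the referent, given the pooled context.
        self.ref_score = nn.Sequential(
            nn.Linear(2 * dim_region + dim_lang, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, 1),
        )

    def forward(self, regions: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # regions: (n, dim_region) region features; lang: (dim_lang,) expression feature.
        n = regions.size(0)
        lang_rep = lang.unsqueeze(0).expand(n, -1)
        # Soft attention over context candidates, i.e. an approximate q(z | L).
        att_logits = self.ctx_att(torch.cat([regions, lang_rep], dim=-1)).squeeze(-1)
        q_z = torch.softmax(att_logits, dim=0)
        # Expected context feature under q(z | L).
        ctx = (q_z.unsqueeze(-1) * regions).sum(dim=0)
        ctx_rep = ctx.unsqueeze(0).expand(n, -1)
        # Score every region as the referent given the expected context.
        scores = self.ref_score(torch.cat([regions, ctx_rep, lang_rep], dim=-1))
        return scores.squeeze(-1)  # (n,); argmax gives the grounded referent

# Usage on random features (2048-d regions, 1024-d expression are assumptions):
scorer = VariationalContextScorer(dim_region=2048, dim_lang=1024)
scores = scorer(torch.randn(7, 2048), torch.randn(1024))
print(scores.argmax().item())
```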

Hanwang Zhang, Yulei Niu, Shih-Fu Chang · 2017

Related benchmarks

Task                                      Dataset                  Metric             Result  Rank
Visual Grounding                          RefCOCO+ (testB)         Accuracy           53.2    169
Visual Grounding                          RefCOCO+ (testA)         Accuracy           58.4    168
Visual Grounding                          RefCOCO (testB)          Accuracy           67.4    125
Visual Grounding                          RefCOCO (testA)          Accuracy           73.3    117
Visual Grounding                          ReferCOCO v1 (testB)     Acc@0.5            67.44   30
Visual Grounding                          ReferItGame (test)       Pr@0.5             0.3113  26
Visual Grounding                          ReferCOCO+ v1 (testA)    Acc@0.5            58.4    24
Referring Expression Object Segmentation  RefCOCOg UMD (val)       --                 --      20
Visual Grounding                          ReferCOCOg Google (val)  Accuracy@0.5 IoU   62.3    16
Visual Grounding                          ReferItGame (test)       --                 --      14

(Showing 10 of 12 rows.)
