
Contrastive Learning for Weakly Supervised Phrase Grounding

About

Phrase grounding, the problem of associating image regions with caption words, is a crucial component of vision-language tasks. We show that phrase grounding can be learned by optimizing word-region attention to maximize a lower bound on mutual information between images and caption words. Given pairs of images and captions, we maximize the compatibility of the attention-weighted regions and the words in the corresponding caption, compared to non-corresponding pairs of images and captions. A key idea is to construct effective negative captions for learning through language model guided word substitutions. Training with our negatives yields a $\sim10\%$ absolute gain in accuracy over randomly-sampled negatives from the training data. Our weakly supervised phrase grounding model trained on COCO-Captions shows a healthy gain of $5.7\%$ to achieve $76.7\%$ accuracy on the Flickr30K Entities benchmark.

Tanmay Gupta, Arash Vahdat, Gal Chechik, Xiaodong Yang, Jan Kautz, Derek Hoiem • 2020
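The objective described in the abstract can be illustrated with a minimal sketch: each caption word attends over image region features, the attention-weighted region is scored against the word, and an InfoNCE-style loss requires the true caption to outscore negative captions for the same image. This is a simplified stand-in, not the paper's exact model; the function names (`caption_score`, `info_nce_loss`), the plain dot-product compatibility, and the raw (non-contextualized) word features are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def caption_score(regions, words):
    """Word-region attention compatibility.

    regions: (num_regions, d) image region features
    words:   (num_words, d) caption word features
    Each word attends over the regions; the score is the mean dot
    product between each word and its attention-weighted region.
    """
    attn = softmax(words @ regions.T)   # (num_words, num_regions)
    attended = attn @ regions           # (num_words, d)
    return float(np.mean(np.sum(words * attended, axis=1)))

def info_nce_loss(regions, pos_words, neg_words_list):
    """InfoNCE-style lower bound on mutual information: the matching
    caption must score higher than the negative captions (e.g. ones
    produced by word substitutions) for the same image."""
    scores = [caption_score(regions, pos_words)]
    scores += [caption_score(regions, w) for w in neg_words_list]
    return float(-np.log(softmax(np.array(scores))[0]))
```

Minimizing this loss pushes the attention maps to ground each word in a region that distinguishes the true caption from its negatives, which is why harder (language-model-guided) negatives give a stronger training signal than randomly sampled captions.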

Related benchmarks

Task                      Dataset                            Result                        Rank
Visual Grounding          RefCOCO+ (testB)                   Accuracy: 41.11               169
Visual Grounding          RefCOCO+ (testA)                   Accuracy: 39.8                168
Visual Grounding          Who's Waldo (test)                 Accuracy: 41.1                31
Visual Grounding          Flickr30K Entities (test)          Accuracy: 76.74               29
Phrase grounding          Flickr30K                          --                            20
Interaction Localization  YouCook2 Interactions 1.0 (test)   Localization Accuracy: 24.04  8
