Contrastive Learning for Weakly Supervised Phrase Grounding
About
Phrase grounding, the problem of associating image regions with caption words, is a crucial component of vision-language tasks. We show that phrase grounding can be learned by optimizing word-region attention to maximize a lower bound on the mutual information between images and caption words. Given pairs of images and captions, we maximize the compatibility of the attention-weighted regions with the words in the corresponding caption, relative to non-corresponding image-caption pairs. A key idea is to construct effective negative captions for learning through language-model-guided word substitutions. Training with our negatives yields a $\sim10\%$ absolute gain in accuracy over randomly sampled negatives from the training data. Our weakly supervised phrase grounding model trained on COCO-Captions shows a healthy gain of $5.7\%$ to achieve $76.7\%$ accuracy on the Flickr30K Entities benchmark.
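To make the objective concrete, here is a minimal PyTorch sketch (not the authors' released code) of the word-region attention and InfoNCE-style contrastive loss the abstract describes. The function names (`attention_pooled_regions`, `info_nce_loss`) and the dot-product compatibility score are illustrative assumptions; negative captions are taken as given, since in the paper they come from language-model-guided word substitutions.

```python
import torch
import torch.nn.functional as F

def attention_pooled_regions(region_feats, word_embs):
    """Attend over image regions for each caption word.

    region_feats: (num_regions, dim) region features, e.g. from an object detector
    word_embs:    (num_words, dim) contextualized word embeddings
    Returns (num_words, dim): one attention-weighted region vector per word.
    """
    # Word-region compatibility scores -> attention over regions per word.
    scores = word_embs @ region_feats.t()   # (num_words, num_regions)
    attn = F.softmax(scores, dim=-1)
    return attn @ region_feats              # (num_words, dim)

def info_nce_loss(region_feats, pos_words, neg_words_list):
    """Contrastive lower bound on mutual information: the true caption's
    compatibility with the image must out-score every negative caption.

    pos_words:      (num_words, dim) embeddings of the true caption
    neg_words_list: list of (num_words, dim) embeddings of negative captions
    """
    def compatibility(words):
        attended = attention_pooled_regions(region_feats, words)
        # Average per-word dot products -> a caption-level compatibility score.
        return (attended * words).sum(-1).mean()

    logits = torch.stack(
        [compatibility(pos_words)]
        + [compatibility(neg) for neg in neg_words_list]
    )
    # The positive pair sits at index 0; maximize its log-probability.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```

Under this setup, grounding a phrase at test time would amount to reading off, for each phrase word, the region with the highest attention weight in `attn`.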
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Visual Grounding | RefCOCO+ (testB) | Accuracy | 41.11 | 169 |
| Visual Grounding | RefCOCO+ (testA) | Accuracy | 39.8 | 168 |
| Visual Grounding | Who's Waldo (test) | Accuracy | 41.1 | 31 |
| Visual Grounding | Flickr30K Entities (test) | Accuracy | 76.74 | 29 |
| Phrase grounding | Flickr30K | -- | -- | 20 |
| Interaction Localization | YouCook2 Interactions 1.0 (test) | Localization Accuracy | 24.04 | 8 |