Fine-Grained Semantically Aligned Vision-Language Pre-Training
About
Large-scale vision-language pre-training has shown impressive advances on a wide range of downstream tasks. Existing methods mainly model cross-modal alignment via the similarity of global image and text representations, or via cross-modal attention over image and text features. However, they fail to explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, as only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aLigned visiOn-langUage PrE-training framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently compute the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on a variety of vision-language tasks. Furthermore, without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a promising new direction for learning fine-grained semantics from large-scale raw image-text pairs. The repository of this work is at https://github.com/YYJMJC/LOUPE.
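To give a concrete sense of the game-theoretic quantity involved, the sketch below is a plain Monte Carlo estimator of the Shapley interaction index between two "players" (e.g., a visual region and a textual phrase treated as coalition members). This is only an illustration of the underlying concept, not LOUPE's uncertainty-aware neural module; the `value_fn` coalition-scoring function and player encoding are hypothetical stand-ins.

```python
import random

def shapley_interaction(value_fn, players, i, j, num_samples=2000, seed=0):
    """Monte Carlo estimate of the Shapley interaction index I(i, j).

    I(i, j) measures how much players i and j contribute *together*
    beyond the sum of their individual contributions, averaged over
    coalitions S drawn from the remaining players. Sampling the
    coalition size uniformly and then a subset of that size uniformly
    reproduces the Shapley weighting in expectation.
    """
    rng = random.Random(seed)
    rest = [p for p in players if p not in (i, j)]
    total = 0.0
    for _ in range(num_samples):
        k = rng.randint(0, len(rest))          # coalition size, uniform
        S = set(rng.sample(rest, k))           # coalition of that size
        # Discrete "mixed second derivative" of the value function:
        # positive when i and j are synergistic, negative when redundant.
        total += (value_fn(S | {i, j}) - value_fn(S | {i})
                  - value_fn(S | {j}) + value_fn(S))
    return total / num_samples

# Toy game: players 1 and 2 only create value when they appear together,
# so their interaction index is exactly 1.0.
synergy = lambda S: 1.0 if {1, 2} <= S else 0.0
print(shapley_interaction(synergy, [1, 2, 3, 4], 1, 2))  # → 1.0
```

In a pre-training setting, `value_fn` would be some alignment score computed on the subset of regions and phrases in `S`; the paper's contribution is learning to predict these interactions efficiently rather than enumerating coalitions.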
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 1.378 | 682 |
| Image Classification | Food-101 | -- | -- | 494 |
| Image Classification | Stanford Cars | -- | -- | 477 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 76.3 | 460 |
| Image Classification | ImageNet | Top-1 Accuracy | 85.7 | 429 |
| Image Classification | SUN397 | -- | -- | 425 |
| Image Classification | Aircraft | Accuracy | 80.2 | 302 |
| Visual Grounding | RefCOCO+ (val) | Accuracy | 22.9 | 171 |
| Visual Grounding | RefCOCO+ (testB) | Accuracy | 23.6 | 169 |
| Visual Grounding | RefCOCO+ (testA) | Accuracy | 23.3 | 168 |