MAFA: Managing False Negatives for Vision-Language Pre-training
About
We consider a critical issue of false negatives in Vision-Language Pre-training (VLP), a challenge that arises from the inherent many-to-many correspondence of image-text pairs in large-scale web-crawled datasets. The presence of false negatives can impede achieving optimal performance and even lead to a significant performance drop. To address this challenge, we propose MAFA (MAnaging FAlse negatives), which consists of two pivotal components building upon the recently developed GRouped mIni-baTch sampling (GRIT) strategy: 1) an efficient connection mining process that identifies and converts false negatives into positives, and 2) label smoothing for the image-text contrastive (ITC) loss. Our comprehensive experiments verify the effectiveness of MAFA across multiple downstream tasks, emphasizing the crucial role of addressing false negatives in VLP, potentially even surpassing the importance of addressing false positives. In addition, the compatibility of MAFA with the recent BLIP-family model is also demonstrated. Code is available at https://github.com/jaeseokbyun/MAFA.
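Below is a minimal sketch of how the two components described above could interact in an image-text contrastive (ITC) objective: mined false negatives are promoted to positives in the target distribution, and label smoothing is applied on top. This is an illustration under stated assumptions, not the MAFA implementation; the function name, the `false_negative_mask` argument, and the hyperparameter values are hypothetical, and the actual connection-mining step that produces the mask is part of the released code, not shown here.

```python
import torch
import torch.nn.functional as F

def itc_loss_with_false_negative_targets(image_emb, text_emb, false_negative_mask,
                                         temperature=0.07, smoothing=0.1):
    """Illustrative ITC loss with (1) mined false negatives converted to
    positives and (2) label smoothing on the soft targets.

    image_emb, text_emb: (B, D) L2-normalized embeddings for a mini-batch.
    false_negative_mask: (B, B) boolean; True where an off-diagonal pair was
        judged to actually match (e.g., by a connection-mining step).
    """
    B = image_emb.size(0)
    logits_i2t = image_emb @ text_emb.t() / temperature  # (B, B) similarity logits
    logits_t2i = logits_i2t.t()

    # Ground-truth positives lie on the diagonal; mined false negatives
    # are promoted to additional positives.
    positives = torch.eye(B, device=image_emb.device, dtype=torch.bool) | false_negative_mask

    def soft_targets(pos):
        # Normalize positives per row, then mix with a uniform distribution
        # (label smoothing).
        t = pos.float()
        t = t / t.sum(dim=1, keepdim=True)
        return (1.0 - smoothing) * t + smoothing / B

    targets_i2t = soft_targets(positives)
    targets_t2i = soft_targets(positives.t())

    loss_i2t = -(targets_i2t * F.log_softmax(logits_i2t, dim=1)).sum(dim=1).mean()
    loss_t2i = -(targets_t2i * F.log_softmax(logits_t2i, dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```

With an all-False mask and `smoothing=0`, this reduces to the standard in-batch ITC loss; the mask and smoothing only reshape the targets, so it drops into an existing contrastive training loop without other changes.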
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 125.4 | 682 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 75.91 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 75.93 | 466 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 96.2 | 439 |
| Text-to-Image Retrieval | Flickr30k (test) | Recall@1 | 84.9 | 423 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 84.9 | 375 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 96.2 | 370 |
| Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 82.16 | 327 |
| Visual Question Answering | OK-VQA (test) | Accuracy | 29 | 296 |
| Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 82.66 | 288 |