
MAFA: Managing False Negatives for Vision-Language Pre-training

About

We consider a critical issue of false negatives in Vision-Language Pre-training (VLP), a challenge that arises from the inherent many-to-many correspondence of image-text pairs in large-scale web-crawled datasets. The presence of false negatives can impede achieving optimal performance and even lead to a significant performance drop. To address this challenge, we propose MAFA (MAnaging FAlse negatives), which consists of two pivotal components building upon the recently developed GRouped mIni-baTch sampling (GRIT) strategy: 1) an efficient connection mining process that identifies and converts false negatives into positives, and 2) label smoothing for the image-text contrastive (ITC) loss. Our comprehensive experiments verify the effectiveness of MAFA across multiple downstream tasks, emphasizing the crucial role of addressing false negatives in VLP, potentially even surpassing the importance of addressing false positives. In addition, the compatibility of MAFA with the recent BLIP-family model is also demonstrated. Code is available at https://github.com/jaeseokbyun/MAFA.
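The second component of MAFA applies label smoothing to the image-text contrastive (ITC) loss so that mined positives (including converted false negatives) share most of the target mass while residual negatives receive a small smoothed weight. The following is a minimal numpy sketch of that idea, not the authors' implementation; the function name, `pos_mask` input, and `eps` parameter are assumptions for illustration:

```python
import numpy as np

def itc_label_smoothing_loss(sim, pos_mask, eps=0.1):
    """Hypothetical label-smoothed ITC loss in the spirit of MAFA.

    sim:      (B, B) image-to-text similarity logits for a mini-batch.
    pos_mask: (B, B) boolean mask; True for positive pairs, including
              false negatives that connection mining converted to positives.
    eps:      smoothing mass spread uniformly over the remaining negatives.
    """
    B = sim.shape[0]
    # Row-wise softmax over each image's candidate texts (numerically stable).
    logits = sim - sim.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    # Smoothed targets: positives split (1 - eps), negatives split eps.
    n_pos = np.maximum(pos_mask.sum(axis=1, keepdims=True), 1)
    n_neg = np.maximum(B - n_pos, 1)
    targets = np.where(pos_mask, (1.0 - eps) / n_pos, eps / n_neg)

    # Cross-entropy between the smoothed targets and predicted distribution.
    return float(-(targets * np.log(probs + 1e-12)).sum(axis=1).mean())
```

With `eps=0` and an identity `pos_mask` this reduces to the standard one-hot contrastive cross-entropy; raising `eps` softens the targets, which is what mitigates the penalty on unlabeled but semantically matching pairs.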

Jaeseok Byun, Dohoon Kim, Taesup Moon • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 125.4 | 682 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 75.91 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 75.93 | 466 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 96.2 | 439 |
| Text-to-Image Retrieval | Flickr30k (test) | Recall@1 | 84.9 | 423 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 84.9 | 375 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 96.2 | 370 |
| Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 82.16 | 327 |
| Visual Question Answering | OK-VQA (test) | Accuracy | 29 | 296 |
| Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 82.66 | 288 |
Showing 10 of 20 rows
