
MAFA: Managing False Negatives for Vision-Language Pre-training

About

We consider the critical issue of false negatives in Vision-Language Pre-training (VLP), a challenge that arises from the inherent many-to-many correspondence of image-text pairs in large-scale web-crawled datasets. The presence of false negatives can prevent a model from reaching optimal performance and can even cause a significant performance drop. To address this challenge, we propose MAFA (MAnaging FAlse negatives), which builds on the recently developed GRouped mIni-baTch sampling (GRIT) strategy and consists of two pivotal components: 1) an efficient connection-mining process that identifies false negatives and converts them into positives, and 2) label smoothing for the image-text contrastive (ITC) loss. Our comprehensive experiments verify the effectiveness of MAFA across multiple downstream tasks, emphasizing the crucial role of addressing false negatives in VLP, potentially even surpassing the importance of addressing false positives. In addition, we demonstrate the compatibility of MAFA with the recent BLIP-family models. Code is available at https://github.com/jaeseokbyun/MAFA.
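To make the label-smoothing component concrete, the sketch below shows the standard label-smoothed cross-entropy applied to one row of an image-text similarity matrix: instead of placing all target mass on the matched text, a fraction `eps` is spread uniformly over the batch, so unannotated-but-matching texts (potential false negatives) are penalized less. This is a minimal illustration of the generic recipe, not MAFA's exact formulation; the function name and `eps` value are assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of similarity scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def smoothed_itc_loss(sim_row, pos_idx, eps=0.1):
    """Label-smoothed cross-entropy for one image against a batch of texts.

    sim_row: similarities between one image and each text in the batch.
    pos_idx: index of the annotated (positive) text.
    eps:     smoothing weight; eps=0 recovers the standard ITC loss.
    Target distribution: (1 - eps) on the positive, eps/N uniform
    (assumption: the usual label-smoothing recipe).
    """
    n = len(sim_row)
    probs = softmax(sim_row)
    targets = [eps / n + ((1.0 - eps) if j == pos_idx else 0.0) for j in range(n)]
    return -sum(t * math.log(p) for t, p in zip(targets, probs))
```

With `eps > 0`, texts other than the annotated positive receive a small share of the target mass, so a false negative with high similarity contributes a smaller gradient pushing it away from the image than under the hard one-hot target.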

Jaeseok Byun, Dohoon Kim, Taesup Moon • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 75.91 | 706 |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 125.4 | 682 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 96.2 | 491 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 75.93 | 486 |
| Text-to-Image Retrieval | Flickr30k (test) | R@1 | 84.9 | 445 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 84.9 | 432 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 96.2 | 392 |
| Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 82.16 | 346 |
| Visual Question Answering | OK-VQA (test) | Accuracy | 29 | 327 |
| Text-to-Image Retrieval | MSCOCO 5K (test) | R@1 | 61.6 | 308 |

Showing 10 of 24 rows.
