FFF: Fixing Flawed Foundations in contrastive pre-training results in very strong Vision-Language models

About

Although noise and caption quality have been acknowledged as important factors impacting vision-language contrastive pre-training, in this paper we show that the full potential of improving the training process by addressing such issues is yet to be realized. Specifically, we first study and analyze two issues affecting training: incorrect assignment of negative pairs, and low caption quality and diversity. We then devise effective solutions for both problems, which essentially require training with multiple true positive pairs. Finally, we propose training with a sigmoid loss to meet this requirement. We show very large gains over the current state of the art for both image recognition ($\sim +6\%$ on average over 11 datasets) and image retrieval ($\sim +19\%$ on Flickr30k and $\sim +15\%$ on MSCOCO).
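The abstract's key technical move is replacing the usual softmax (InfoNCE) contrastive objective with a sigmoid loss, because the sigmoid formulation scores each image-text pair independently and can therefore accommodate multiple true positives per row. As a rough illustration only, not the paper's code, the sketch below shows a SigLIP-style sigmoid loss extended with a positive-pair mask; the function name and the `temperature` and `bias` defaults are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multi_positive_sigmoid_loss(image_emb, text_emb, positive_mask,
                                temperature=10.0, bias=-10.0):
    """Pairwise sigmoid contrastive loss (SigLIP-style sketch).

    Each image-text pair gets an independent binary sigmoid term,
    so a batch row may contain several true positives, which a
    softmax-based InfoNCE loss cannot express.

    image_emb:     (N, D) L2-normalized image embeddings
    text_emb:      (N, D) L2-normalized text embeddings
    positive_mask: (N, N) bool; True where pair (i, j) is a true match.
                   The identity matrix recovers the single-positive case.
    """
    logits = temperature * image_emb @ text_emb.t() + bias  # (N, N)
    signs = positive_mask.float() * 2.0 - 1.0               # +1 positive, -1 negative
    # -log sigmoid(sign * logit): pulls positives up, pushes negatives down
    return -F.logsigmoid(signs * logits).mean()

# Hypothetical usage: caption 1 also describes image 0,
# so row 0 carries two true positives.
imgs = F.normalize(torch.randn(4, 512), dim=-1)
txts = F.normalize(torch.randn(4, 512), dim=-1)
mask = torch.eye(4, dtype=torch.bool)
mask[0, 1] = True
loss = multi_positive_sigmoid_loss(imgs, txts, mask)
```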

Adrian Bulat, Yassine Ouali, Georgios Tzimiropoulos • 2024

Related benchmarks

Task                      Dataset             Metric     Result   Rank
Image Classification      Food-101            Accuracy   79.8     494
Image Classification      DTD                 Accuracy   51.1     487
Image Classification      Stanford Cars       Accuracy   47.3     477
Image Classification      SUN397              Accuracy   68.7     425
Text-to-Image Retrieval   Flickr30k (test)    Recall@1   72.9     423
Image-to-Text Retrieval   Flickr30k (test)    Recall@1   87.9     370
Image Classification      CIFAR100            Accuracy   73.7     331
Classification            Cars                Accuracy   16.3     314
Image Classification      Aircraft            Accuracy   4.6      302
Image Classification      Oxford-IIIT Pets    Accuracy   79.9     259

(10 of 26 rows shown)
