
Modeling Caption Diversity in Contrastive Vision-Language Pretraining

About

There are a thousand ways to caption an image. Contrastive Language-Image Pretraining (CLIP), by contrast, maps an image and its caption to a single vector -- limiting how well CLIP-like models can represent the diverse ways to describe an image. In this work, we introduce Llip, Latent Language Image Pretraining, which models the diversity of captions that could match an image. Llip's vision encoder outputs a set of visual features that are mixed into a final representation by conditioning on information derived from the text. We show that Llip outperforms non-contextualized baselines like CLIP and SigLIP on a variety of tasks, even with large-scale encoders. Llip improves zero-shot classification by an average of 2.9% across zero-shot classification benchmarks with a ViT-G/14 encoder. Specifically, Llip attains a zero-shot top-1 accuracy of 83.5% on ImageNet, outperforming a similarly sized CLIP by 1.4%. We also demonstrate an improvement of 6.0% on zero-shot retrieval on MS-COCO. We provide a comprehensive analysis of the components introduced by the method and demonstrate that Llip leads to richer visual representations.
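The text-conditioned mixing described above can be illustrated with a minimal sketch: the vision encoder emits K mixture tokens, and a feature derived from the caption acts as a single cross-attention query that pools them into one image representation. The function and variable names below are illustrative, and the real Llip model uses learned multi-head cross-attention rather than this single-query dot-product simplification.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def text_conditioned_pooling(visual_tokens, text_query):
    """Mix K visual mixture tokens into one representation,
    conditioned on a caption-derived query (hypothetical sketch).

    visual_tokens: (K, d) array of mixture tokens from the vision encoder
    text_query:    (d,) feature derived from the text encoder
    Returns a (d,) text-contextualized visual representation:
    a convex combination of the K tokens, so different captions
    select different views of the same image.
    """
    d = text_query.shape[0]
    scores = visual_tokens @ text_query / np.sqrt(d)  # (K,) attention logits
    weights = softmax(scores)                         # (K,) sum to 1
    return weights @ visual_tokens                    # (d,) pooled feature

rng = np.random.default_rng(0)
K, d = 8, 16
tokens = rng.normal(size=(K, d))
query = rng.normal(size=(d,))
rep = text_conditioned_pooling(tokens, query)
```

Because the pooling weights depend on the caption, two different captions of the same image yield two different pooled vectors, which is what lets the model represent caption diversity that a single fixed image embedding cannot.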

Samuel Lavoie, Polina Kirichenko, Mark Ibrahim, Mahmoud Assran, Andrew Gordon Wilson, Aaron Courville, Nicolas Ballas • 2024

Related benchmarks

Task                    | Dataset       | Metric         | Result | Rank
------------------------|---------------|----------------|--------|-----
Image Classification    | CIFAR-10      | --             | --     | 507
Image Classification    | Food-101      | --             | --     | 494
Image Classification    | DTD           | --             | --     | 487
Image Classification    | Flowers102    | Accuracy       | 74.8   | 478
Image Classification    | Stanford Cars | --             | --     | 477
Image Classification    | CIFAR-10      | --             | --     | 471
Image Classification    | ImageNet      | Top-1 Accuracy | 67.5   | 429
Image Classification    | SUN397        | --             | --     | 425
Image-to-Text Retrieval | Flickr30K     | R@1            | 93.2   | 379
Image Classification    | ImageNet      | Top-1 Accuracy | 75.3   | 324

Showing 10 of 23 rows.
