Meta CLIP 2: A Worldwide Scaling Recipe
About
Contrastive Language-Image Pretraining (CLIP) is a popular foundation model, supporting tasks ranging from zero-shot classification and retrieval to serving as the vision encoder for multimodal large language models (MLLMs). Although CLIP has been successfully trained on billion-scale image-text pairs from the English-speaking world, scaling its training further to learn from worldwide web data remains challenging: (1) no curation method exists to handle data points from the non-English world; (2) the English performance of existing multilingual CLIP models is worse than that of their English-only counterparts, i.e., the "curse of multilinguality" that is common in LLMs. Here, we present Meta CLIP 2, the first recipe for training CLIP from scratch on worldwide web-scale image-text pairs. To generalize our findings, we conduct rigorous ablations with the minimal changes necessary to address the above challenges, and present a recipe that enables mutual benefits between English and non-English world data. In zero-shot ImageNet classification, Meta CLIP 2 ViT-H/14 surpasses its English-only counterpart by 0.8% and mSigLIP by 0.7%, and, surprisingly, sets a new state of the art without system-level confounding factors (e.g., translation, bespoke architecture changes) on multilingual benchmarks, reaching 57.4% on CVQA, 50.2% on Babel-ImageNet, and 64.3% on XM3600 image-to-text retrieval.
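As background for the zero-shot results above, the following is a minimal sketch of how CLIP-style zero-shot classification works: image and class-prompt embeddings are L2-normalized, and the softmax over scaled cosine similarities gives class probabilities. This is a generic illustration with toy random embeddings, not Meta CLIP 2's actual model or code; the function name and temperature value are assumptions for the example.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot scoring (illustrative sketch).

    image_emb: (d,) image feature; text_embs: (C, d), one row per class prompt.
    Both are L2-normalized, then scaled cosine similarities are softmaxed.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature      # scaled cosine similarities
    logits -= logits.max()                # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs

# Toy example: 3 classes, 4-dim embeddings; the image is close to class 1's prompt.
rng = np.random.default_rng(0)
texts = rng.normal(size=(3, 4))
image = texts[1] + 0.05 * rng.normal(size=4)
probs = zero_shot_classify(image, texts)
print(probs.argmax())  # index of the predicted class
```

In a real pipeline the embeddings would come from the model's image and text encoders, with class prompts such as "a photo of a {label}".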
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1 | 77 | 460 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 91.9 | 379 |
| Video Action Recognition | Kinetics-400 | Top-1 Acc | 84 | 184 |
| Text-to-Image Retrieval | COCO | Recall@1 | 47.7 | 130 |
| Image-to-Text Retrieval | COCO | R@1 | 66.8 | 123 |
| Video Action Recognition | HMDB51 | Top-1 Accuracy | 78.2 | 103 |
| Image-to-Text Retrieval | Flickr30K-CN | R@1 | 89.3 | 99 |
| Text-to-Image Retrieval | Flickr30K-CN | R@1 | 72.2 | 99 |
| Action Recognition | SSV2 | Top-1 Acc | 49.3 | 93 |
| Text-to-Image Retrieval | DCI | R@1 | 50.2 | 68 |