
EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters

About

Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18 billion parameters. With only 6 billion training samples seen, EVA-CLIP-18B achieves an exceptional 80.7% zero-shot top-1 accuracy averaged across 27 widely recognized image classification benchmarks, outperforming its forerunner EVA-CLIP (5 billion parameters) and other open-source CLIP models by a large margin. Remarkably, we observe a consistent performance improvement as EVA-CLIP scales in model size, despite keeping the training dataset constant at 2 billion image-text pairs from LAION-2B and COYO-700M. This dataset is openly available and much smaller than the in-house datasets (e.g., DFN-5B, WebLI-10B) employed in other state-of-the-art CLIP models. EVA-CLIP-18B demonstrates the potential of EVA-style weak-to-strong visual model scaling. With our model weights made publicly available, we hope to facilitate future research in vision and multimodal foundation models.
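The zero-shot accuracies reported above follow the standard CLIP evaluation protocol: embed each candidate class name with the text encoder (typically via a prompt template such as "a photo of a {class}"), embed the image with the vision encoder, and predict the class whose text embedding has the highest cosine similarity to the image embedding. The sketch below illustrates this protocol with the open_clip library; the small ViT-B-32 checkpoint and the image path are stand-in assumptions for illustration, not the EVA-CLIP-18B release itself, whose exact loading identifiers should be taken from the published weights.

```python
import torch
import open_clip
from PIL import Image

# Stand-in checkpoint for illustration -- substitute the identifiers that
# ship with the released EVA-CLIP weights; any OpenCLIP model works here.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

classes = ["golden retriever", "tabby cat", "sports car"]
prompts = [f"a photo of a {c}" for c in classes]  # standard CLIP prompt template

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical image path
text = tokenizer(prompts)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    # L2-normalize so the dot product equals cosine similarity
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    # Scaled similarities -> probabilities over the candidate classes
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

for c, p in zip(classes, probs[0].tolist()):
    print(f"{c}: {p:.3f}")
```

Benchmark top-1 accuracy is then just the fraction of test images whose highest-probability class matches the label, which is how numbers like the 83.8 on ImageNet-1K below are computed.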

Quan Sun, Jinsheng Wang, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Xinlong Wang · 2024

Related benchmarks

Task                 | Dataset            | Result               | Rank
---------------------|--------------------|----------------------|-----
Image Classification | ImageNet-1K        | Top-1 Acc: 83.8      | 1239
Image Classification | CIFAR-100          | --                   | 691
Image Classification | Stanford Cars      | --                   | 635
Image Classification | ImageNet V2        | Top-1 Acc: 77.9      | 611
Image Classification | EuroSAT            | --                   | 569
Image Classification | ImageNet-1k (val)  | Top-1 Accuracy: 88.9 | 543
Image Classification | Food-101           | --                   | 542
Image Classification | DTD                | --                   | 542
Action Recognition   | Kinetics-400       | --                   | 481
Image Classification | SUN397             | --                   | 425

Showing 10 of 71 rows.

