
EVA-CLIP: Improved Training Techniques for CLIP at Scale

About

Contrastive language-image pre-training (CLIP) has gained increasing attention for its potential in various scenarios. In this paper, we propose EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training. Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to achieve superior performance to previous CLIP models with the same number of parameters at significantly smaller training cost. Notably, our largest 5.0B-parameter EVA-02-CLIP-E/14+, trained on only 9 billion seen samples, achieves 82.0% zero-shot top-1 accuracy on the ImageNet-1K validation set. A smaller EVA-02-CLIP-L/14+, with only 430 million parameters and 6 billion seen samples, achieves 80.4% zero-shot top-1 accuracy on the ImageNet-1K validation set. To facilitate open access and open research, we release the complete suite of EVA-CLIP to the community at https://github.com/baaivision/EVA/tree/master/EVA-CLIP.
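The zero-shot top-1 numbers quoted above come from the standard CLIP evaluation recipe: embed each class name as a text prompt, embed the image, and pick the class whose text embedding has the highest cosine similarity to the image embedding. A minimal sketch of that scoring step, using made-up toy vectors rather than real EVA-CLIP encoder outputs:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, class_names):
    """CLIP-style zero-shot top-1: L2-normalize image and text
    embeddings, score by cosine similarity, return the best class.
    Embeddings here are toy stand-ins, not real EVA-CLIP outputs."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # one cosine similarity per class
    return class_names[int(np.argmax(sims))], sims

# Toy 4-d embeddings standing in for the vision/text encoder outputs.
image_emb = np.array([0.9, 0.1, 0.0, 0.2])
text_embs = np.array([
    [0.8, 0.2, 0.1, 0.1],   # e.g. "a photo of a dog"
    [0.0, 0.9, 0.3, 0.0],   # e.g. "a photo of a cat"
])
label, sims = zero_shot_classify(image_emb, text_embs, ["dog", "cat"])
print(label)  # -> dog
```

At ImageNet-1K scale the same computation runs over 1,000 prompt embeddings (often averaged over several prompt templates), but the top-1 decision rule is exactly this argmax over cosine similarities.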

Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, Yue Cao • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 82 | 1952 |
| Visual Question Answering | VizWiz | Accuracy | 45.28 | 1525 |
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 70.42 | 1362 |
| Visual Question Answering | GQA | Accuracy | 57.93 | 1249 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 82.1 | 1239 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 82 | 848 |
| Image Classification | CIFAR-100 | -- | -- | 691 |
| Image Classification | ImageNet A | Top-1 Accuracy | 82.9 | 654 |
| Multimodal Understanding | MMBench | -- | -- | 637 |
(Showing 10 of 180 benchmark rows.)

Other info

Code

https://github.com/baaivision/EVA/tree/master/EVA-CLIP