EVA-CLIP: Improved Training Techniques for CLIP at Scale
About
Contrastive language-image pre-training (CLIP) has attracted growing attention for its effectiveness across a wide range of vision and vision-language tasks. In this paper, we propose EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training. Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to outperform previous CLIP models with the same number of parameters at significantly lower training cost. Notably, our largest 5.0B-parameter EVA-02-CLIP-E/14+, trained on only 9 billion seen samples, achieves 82.0% zero-shot top-1 accuracy on ImageNet-1K val. A smaller EVA-02-CLIP-L/14+, with only 430 million parameters and 6 billion seen samples, achieves 80.4% zero-shot top-1 accuracy on ImageNet-1K val. To facilitate open access and open research, we release the complete suite of EVA-CLIP to the community at https://github.com/baaivision/EVA/tree/master/EVA-CLIP.
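The training objective underlying CLIP-style models is a symmetric contrastive (InfoNCE) loss over paired image and text embeddings. The sketch below is a minimal numpy illustration of that objective, not EVA-CLIP's actual implementation; the function name, the fixed temperature value, and the numpy-only setup are all illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss (illustrative sketch).

    image_emb, text_emb: (N, D) arrays of paired embeddings, where row i
    of each is a matching image/caption pair. The temperature of 0.07 is
    a common choice, not EVA-CLIP's exact (learned) setting.
    """
    # L2-normalize so the dot product below is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (N, N) similarity matrix; the diagonal holds the matching pairs.
    logits = image_emb @ text_emb.T / temperature

    def cross_entropy(l):
        # Softmax cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

For correctly matched pairs the loss approaches zero, while mismatched pairings are penalized; in practice the same objective is computed on GPU over large batches with a learned temperature.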
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1K (val) | Top-1 Accuracy | 82.0 | 1866 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 82.1 | 836 |
| Image Classification | ImageNet-1K (test) | Top-1 Accuracy | 82.0 | 798 |
| Image Classification | CIFAR-100 | Top-1 Accuracy | 93.2 | 622 |
| Image Classification | ImageNet-A | Top-1 Accuracy | 82.9 | 553 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 82.0 | 524 |
| Image Classification | ImageNet-1K (val) | Top-1 Accuracy | 88.1 | 512 |
| Image Classification | EuroSAT | -- | -- | 497 |
| Image Classification | Food-101 | -- | -- | 494 |
| Image Classification | ImageNet-V2 | Top-1 Accuracy | 75.7 | 487 |