
Scaling Language-Image Pre-training via Masking

About

We present Fast Language-Image Pre-training (FLIP), a simple and more efficient method for training CLIP. Our method randomly masks out and removes a large portion of image patches during training. Masking allows us to learn from more image-text pairs given the same wall-clock time and contrast more samples per iteration with similar memory footprint. It leads to a favorable trade-off between accuracy and training time. In our experiments on 400 million image-text pairs, FLIP improves both accuracy and speed over the no-masking baseline. On a large diversity of downstream tasks, FLIP dominantly outperforms the CLIP counterparts trained on the same data. Facilitated by the speedup, we explore the scaling behavior of increasing the model size, data size, or training length, and report encouraging results and comparisons. We hope that our work will foster future research on scaling vision-language learning.
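The random masking the abstract describes can be sketched in a few lines: drop a random subset of image patches before they enter the encoder, so each iteration processes fewer tokens. This is a minimal NumPy illustration; the function name, shapes, and 50% ratio are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.5, rng=None):
    """Randomly drop a fraction of image patches (FLIP-style masking sketch).

    patches: (N, D) array of N flattened patch embeddings.
    Returns the kept patches and the indices of the patches that were kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))          # e.g. keep 98 of 196 patches at 50% masking
    keep = rng.permutation(n)[:n_keep]          # random subset of patch indices
    return patches[keep], keep

# Example: a 14x14 grid of 768-dim patches, masking 50% of them
patches = np.zeros((196, 768))
kept, idx = random_mask_patches(patches, mask_ratio=0.5)
```

Because the encoder only sees the kept patches, compute and memory per image drop roughly in proportion to the mask ratio, which is what lets the same wall-clock budget cover more image-text pairs or a larger contrastive batch.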

Yanghao Li, Haoqi Fan, Ronghang Hu, Christoph Feichtenhofer, Kaiming He • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet-1K | Top-1 Acc | 61.3 | 836
Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 86.9 | 798
Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 74.7 | 664
Multimodal Evaluation | MME | -- | -- | 557
Image Classification | ImageNet A | Top-1 Acc | 71.9 | 553
Image Classification | ImageNet-1K | -- | -- | 524
Image Classification | CIFAR-10 | Accuracy | 85.9 | 507
Image Classification | EuroSAT | Accuracy | 94.1 | 497
Image Classification | DTD | Accuracy | 60.4 | 487
Image Classification | ImageNet V2 | Top-1 Acc | 66.8 | 487
Showing 10 of 109 rows
...

Other info

Code
