
Image Captioners Are Scalable Vision Learners Too

About

Contrastive pretraining on image-text pairs from the web is one of the most popular large-scale pretraining strategies for vision backbones, especially in the context of large multimodal models. At the same time, image captioning on this type of data is commonly considered an inferior pretraining strategy. In this paper, we perform a fair comparison of these two pretraining strategies, carefully matching training data, compute, and model capacity. Using a standard encoder-decoder transformer, we find that captioning alone is surprisingly effective: on classification tasks, captioning produces vision encoders competitive with contrastively pretrained ones, while surpassing them on vision & language tasks. We further analyze the effect of the model architecture and scale, as well as the pretraining data, on the representation quality, and find that captioning exhibits the same or better scaling behavior along these axes. Overall, our results show that plain image captioning is a more powerful pretraining strategy than was previously believed.
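To make the setup concrete, below is a minimal PyTorch sketch of the kind of encoder-decoder captioner the abstract describes: a ViT-style image encoder whose output tokens are cross-attended by a causal text decoder trained with next-token prediction on image-caption pairs. All names, sizes, and the loss helper here are illustrative assumptions, not the paper's actual configuration or code.

```python
# Sketch of captioning pretraining: ViT-style encoder + causal text decoder
# trained with next-token prediction. Hyperparameters are assumptions for
# illustration, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageCaptioner(nn.Module):
    def __init__(self, vocab_size=32000, dim=512, heads=8,
                 enc_layers=6, dec_layers=6, patch=16, img_size=224, max_len=64):
        super().__init__()
        num_patches = (img_size // patch) ** 2
        # ViT-style stem: non-overlapping patches via a strided convolution.
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.img_pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True),
            num_layers=enc_layers)
        # Text decoder: causal self-attention + cross-attention to image tokens.
        self.tok_embed = nn.Embedding(vocab_size, dim)
        self.txt_pos = nn.Parameter(torch.zeros(1, max_len, dim))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, 4 * dim, batch_first=True),
            num_layers=dec_layers)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, images, tokens):
        # images: (B, 3, H, W); tokens: (B, T) caption token ids.
        patches = self.patchify(images).flatten(2).transpose(1, 2)  # (B, N, D)
        memory = self.encoder(patches + self.img_pos)  # vision features
        T = tokens.size(1)
        tgt = self.tok_embed(tokens) + self.txt_pos[:, :T]
        causal = torch.triu(  # block attention to future caption tokens
            torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(hidden)  # (B, T, vocab_size)

def captioning_loss(model, images, tokens, pad_id=0):
    # Shift by one: predict token t from the image and tokens < t.
    logits = model(images, tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1), ignore_index=pad_id)

# Smoke test on random data.
model = ImageCaptioner()
images, tokens = torch.randn(2, 3, 224, 224), torch.randint(1, 32000, (2, 16))
captioning_loss(model, images, tokens).backward()
```

In this sketch, only the encoder (plus the patch stem) would be kept after pretraining as the vision backbone, mirroring how a captioning-pretrained encoder is evaluated on downstream classification and vision & language tasks.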

Michael Tschannen, Manoj Kumar, Andreas Steiner, Xiaohua Zhai, Neil Houlsby, Lucas Beyer • 2023

Related benchmarks

Task                      | Dataset                | Metric         | Result | Rank
Visual Question Answering | VQA v2                 | --             | --     | 1165
Image Classification      | Food-101               | Accuracy       | 93.8   | 494
Image Classification      | Stanford Cars          | Accuracy       | 92.2   | 477
Image Classification      | ImageNet 1k (test)     | Top-1 Accuracy | 87.7   | 359
Image Classification      | CIFAR100               | Accuracy       | 72.9   | 331
Image Classification      | ImageNet               | Top-1 Accuracy | 70.6   | 324
Classification            | Cars                   | Accuracy       | 95.8   | 314
Image Classification      | RESISC45               | --             | --     | 263
Image Classification      | SUN397                 | Accuracy       | 85.2   | 246
Image Classification      | ImageNet-1k 1.0 (test) | Top-1 Accuracy | 83     | 197
(Showing 10 of 32 rows.)

Other info

Code
