
LiT: Zero-Shot Transfer with Locked-image text Tuning

About

This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 85.2% zero-shot transfer accuracy on the ImageNet test set, and 82.5% on the challenging out-of-distribution ObjectNet test set.
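The core of contrastive-tuning is a symmetric contrastive (InfoNCE) loss over paired image and text embeddings; in LiT, only the text tower is trained while the image tower stays frozen. The following is a minimal NumPy sketch of that loss, not the authors' implementation; all function and variable names are illustrative.

```python
# Sketch of LiT-style contrastive alignment (illustrative, NumPy only).
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb: (B, D) outputs of the *locked* (frozen) image tower.
    txt_emb: (B, D) outputs of the trainable text tower.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature           # (B, B) cosine similarities
    labels = np.arange(len(logits))              # matching pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Because the image tower is frozen, its embeddings can even be precomputed
# once and reused across epochs; only txt_emb's parameters receive gradients.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(4, 8))   # stand-in for frozen image features
txt_emb = rng.normal(size=(4, 8))   # stand-in for trainable text features
loss = contrastive_loss(img_emb, txt_emb)
```

At zero-shot time, class names are embedded by the trained text tower and an image is assigned to the class whose text embedding is most similar, which is what enables transfer to new classification or retrieval tasks without further training.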

Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer • 2021

Related benchmarks

Task                      Dataset              Metric     Result  Rank
Image Classification      ImageNet A           Top-1 Acc  81.8    553
Image Classification      ImageNet V2          Top-1 Acc  79.8    487
Image Classification      ImageNet-R           Top-1 Acc  94.9    474
Image-to-Text Retrieval   Flickr30K 1K (test)  R@1        83.9    439
Image Classification      ImageNet             --         --      429
Text-to-Image Retrieval   Flickr30k (test)     R@1        66.5    423
Image Classification      UCF101               Top-1 Acc  60.0    404
Text-to-Image Retrieval   Flickr30K 1K (test)  R@1        66.5    375
Image-to-Text Retrieval   Flickr30k (test)     R@1        83.9    370
Image Classification      ImageNet             Top-1 Acc  85.2    324

Showing 10 of 69 rows.
