
LiT: Zero-Shot Transfer with Locked-image text Tuning

About

This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 85.2% zero-shot transfer accuracy on the ImageNet test set, and 82.5% on the challenging out-of-distribution ObjectNet test set.
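The two-tower training setup described above can be sketched in a few lines. The NumPy mock below is illustrative only, not the paper's implementation: the embeddings, batch size, and temperature are placeholder values, and the key point is that in LiT the image embeddings come from a frozen pre-trained tower while only the text tower would receive gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    """Project embeddings onto the unit sphere before computing similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def lit_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    In LiT, img_emb would come from a *locked* (frozen) pre-trained image
    tower; only the text tower producing txt_emb is trained to align with it.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature       # (B, B) cosine-similarity matrix
    labels = np.arange(len(logits))          # matching pairs sit on the diagonal

    def xent(lg):
        # numerically stable cross-entropy of each row against its diagonal label
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

# toy batch: 4 image/text pairs with 8-dim embeddings (placeholder data)
img = rng.normal(size=(4, 8))
txt = img + 0.1 * rng.normal(size=(4, 8))    # nearly aligned pairs
loss = lit_contrastive_loss(img, txt)
```

Because the image tower is frozen, its embeddings for the training set can in principle be precomputed once, which is part of what makes the method cheap to run.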

Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet A | Top-1 Acc | 81.8 | 654
Image Classification | ImageNet V2 | Top-1 Acc | 79.8 | 611
Image Classification | ImageNet-R | Top-1 Acc | 94.9 | 529
Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 83.9 | 491
Image Classification | UCF101 | Top-1 Acc | 60 | 455
Text-to-Image Retrieval | Flickr30k (test) | Recall@1 | 66.5 | 445
Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 66.5 | 432
Image Classification | ImageNet | — | — | 431
Classification | Cars | Accuracy | 24.3 | 395
Image-to-Text Retrieval | Flickr30k (test) | R@1 | 83.9 | 392

Showing 10 of 72 rows
