
Unsupervised Prompt Learning for Vision-Language Models

About

Contrastive vision-language models like CLIP have shown great progress in transfer learning. At inference time, a proper text description, also known as a prompt, needs to be carefully designed to correctly classify the given images. To avoid laborious prompt engineering, recent works such as CoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models to downstream image recognition tasks using a small set of labeled data. Though promising improvements are achieved, requiring labeled data from the target datasets may restrict scalability. In this paper, we explore a different scenario, in which the labels of the target datasets are unprovided, and we present an unsupervised prompt learning (UPL) approach to avoid prompt engineering while simultaneously improving the transfer performance of CLIP-like vision-language models. As far as we know, UPL is the first work to introduce unsupervised learning into prompt learning. Experimentally, our UPL outperforms the original CLIP with prompt engineering on ImageNet as well as 10 other datasets. An enhanced version of UPL is even competitive with the 8-shot CoOp and the 8-shot Tip-Adapter on most datasets. Code and models are available at https://github.com/tonyhuang2022/UPL.
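The core mechanism the abstract describes can be sketched in a few lines: CLIP-style zero-shot classification scores an image by cosine similarity between its feature and each class's text-prompt feature, and an unsupervised approach like UPL can then keep only the most confident predictions per class as pseudo-labels for prompt optimization. The sketch below is a minimal toy illustration with random-access numpy arrays standing in for real CLIP features; the function names and the top-k selection rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def zero_shot_logits(image_feats, text_feats, temperature=0.01):
    """Cosine-similarity logits between L2-normalized image features
    and per-class text-prompt features (CLIP-style scoring)."""
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return (img @ txt.T) / temperature

def select_pseudo_labels(logits, top_k=2):
    """For each class, keep the top_k most confident unlabeled images
    as (image_index, class_index) pseudo-labeled pairs."""
    # Softmax over classes for per-image confidence.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    selected = []
    for c in range(logits.shape[1]):
        idx = np.where(preds == c)[0]
        # Sort this class's candidates by descending confidence.
        idx = idx[np.argsort(-conf[idx])][:top_k]
        selected.extend((int(i), c) for i in idx)
    return selected

# Toy demo: 6 "images" perfectly aligned with 3 class-text features.
text_feats = np.eye(3)
image_feats = np.eye(3)[[0, 0, 1, 1, 2, 2]]
pairs = select_pseudo_labels(zero_shot_logits(image_feats, text_feats))
```

In the full method, the selected pairs would then supervise learnable prompt vectors in place of hand-crafted templates; here they simply demonstrate the confidence-based selection step.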

Tony Huang, Jack Chu, Fangyun Wei • 2022

Related benchmarks

| Task                 | Dataset      | Metric         | Result | Rank |
|----------------------|--------------|----------------|--------|------|
| Image Classification | CIFAR-100    | Top-1 Accuracy | 65.8   | 622  |
| Image Classification | EuroSAT      | --             | --     | 497  |
| Image Classification | DTD          | --             | --     | 487  |
| Image Classification | Flowers102   | Accuracy       | 73.37  | 478  |
| Image Classification | SUN397       | --             | --     | 425  |
| Image Classification | DTD          | Accuracy       | 44.68  | 419  |
| Image Classification | UCF101       | Top-1 Accuracy | 71.82  | 404  |
| Image Classification | Food101      | Accuracy       | 84.71  | 309  |
| Image Classification | StanfordCars | Accuracy       | 66.19  | 266  |
| Image Classification | FGVCAircraft | Accuracy       | 23.67  | 225  |

Showing 10 of 20 rows.
