
Unsupervised Prompt Learning for Vision-Language Models

About

Contrastive vision-language models such as CLIP have shown great progress in transfer learning. At inference time, a proper text description, also known as a prompt, must be carefully designed to correctly classify the given images. To avoid laborious prompt engineering, recent works such as CoOp, CLIP-Adapter, and Tip-Adapter adapt vision-language models to downstream image recognition tasks using a small set of labeled data. Although these methods achieve promising improvements, the need for labeled data from the target datasets may restrict their scalability. In this paper, we explore a different scenario in which the labels of the target datasets are not provided, and we present an unsupervised prompt learning (UPL) approach that avoids prompt engineering while simultaneously improving the transfer performance of CLIP-like vision-language models. To the best of our knowledge, UPL is the first work to introduce unsupervised learning into prompt learning. Experimentally, UPL outperforms the original CLIP with prompt engineering on ImageNet as well as 10 other datasets. An enhanced version of UPL is even competitive with 8-shot CoOp and 8-shot Tip-Adapter on most datasets. Code and models are available at https://github.com/tonyhuang2022/UPL.
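The core mechanism the abstract describes can be sketched in a few lines: zero-shot classification scores an image embedding against per-class text (prompt) embeddings, and an unsupervised approach in the spirit of UPL can then keep the most confident predictions per class as pseudo-labels for adaptation. The sketch below uses synthetic embeddings and a per-class top-K confidence rule; the function names, the temperature value, and the exact selection rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Unit-normalize embeddings so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def zero_shot_probs(image_embs, class_text_embs, temperature=0.01):
    """CLIP-style zero-shot scores: cosine similarity between each image
    embedding and each class's prompt embedding, softmaxed over classes.
    The temperature value here is an illustrative choice."""
    sims = l2_normalize(image_embs) @ l2_normalize(class_text_embs).T
    logits = sims / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

def topk_pseudo_labels(probs, k=2):
    """For each class, keep the k most confident unlabeled samples as
    (sample_index, class) pseudo-label pairs -- an assumed selection rule
    inspired by the abstract's description, not the paper's exact method."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    selected = []
    for c in range(probs.shape[1]):
        idx = np.where(preds == c)[0]
        idx = idx[np.argsort(-conf[idx])][:k]
        selected.extend((int(i), c) for i in idx)
    return selected

# Synthetic demo: 2 classes, 6 unlabeled images clustered near each class prompt.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(2, 4))
image_embs = np.vstack([
    text_embs[0] + 0.1 * rng.normal(size=(3, 4)),
    text_embs[1] + 0.1 * rng.normal(size=(3, 4)),
])
probs = zero_shot_probs(image_embs, text_embs)
pseudo = topk_pseudo_labels(probs, k=2)
```

In UPL's setting, the pseudo-labeled pairs would then supervise the learning of continuous prompt vectors in place of hand-crafted templates; the selection step above is only the entry point of that pipeline.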

Tony Huang, Jack Chu, Fangyun Wei • 2022

Related benchmarks

Task                  Dataset        Result           Rank
Image Classification  CIFAR-100      --               691
Image Classification  EuroSAT        --               569
Image Classification  Flowers102     Accuracy 73.37   558
Image Classification  DTD            --               542
Image Classification  DTD            Accuracy 44.68   485
Image Classification  Food101        Accuracy 84.71   457
Image Classification  UCF101         Top-1 Acc 71.82  455
Image Classification  SUN397         --               425
Image Classification  StanfordCars   Accuracy 66.19   312
Image Classification  FGVCAircraft   Accuracy 23.67   261

Showing 10 of 20 rows
