
Exploring Visual Prompts for Adapting Large-Scale Models

About

We investigate the efficacy of visual prompting to adapt large-scale models in vision. Following the recent approach from prompt tuning and adversarial reprogramming, we learn a single image perturbation such that a frozen model prompted with this perturbation performs a new task. Through comprehensive experiments, we demonstrate that visual prompting is particularly effective for CLIP and robust to distribution shift, achieving performance competitive with standard linear probes. We further analyze properties of the downstream dataset, prompt design, and output transformation with regard to adaptation performance. The surprising effectiveness of visual prompting provides a new perspective on adapting pre-trained models in vision. Code is available at http://hjbahng.github.io/visual_prompting.
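The core idea can be sketched in a few lines: a single learnable perturbation (here shaped as a border "frame" around the image, one common prompt design) is added to every input before it reaches the frozen model, and only that perturbation is trained. The sketch below is illustrative, in NumPy; the function names, the padding width, and the initialization scale are our assumptions, not the authors' exact implementation.

```python
import numpy as np

def make_padding_prompt(image_size=224, pad=30, rng=None):
    # Illustrative sketch (not the paper's code): the learnable
    # parameters live only in a border of width `pad`; a mask zeroes
    # out the interior so the prompt frames the image content.
    rng = rng or np.random.default_rng(0)
    prompt = rng.normal(scale=0.03, size=(3, image_size, image_size))
    mask = np.ones((image_size, image_size), dtype=bool)
    mask[pad:-pad, pad:-pad] = False
    return prompt * mask

def apply_prompt(images, prompt):
    # The same single prompt is added to every image in the batch;
    # the frozen model then classifies `images + prompt`.
    return images + prompt[None]

prompt = make_padding_prompt()
batch = np.zeros((4, 3, 224, 224))   # placeholder batch of images
prompted = apply_prompt(batch, prompt)
```

In training, only `prompt` would receive gradients (e.g. via backpropagation through the frozen model's loss), while all model weights stay fixed.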

Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, Phillip Isola • 2022

Related benchmarks

Task                 | Dataset      | Metric         | Result | Rank
Image Classification | EuroSAT      | Accuracy       | 90.8   | 497
Image Classification | Food-101     | Accuracy       | 81.8   | 494
Image Classification | SUN397       | Accuracy       | 67.1   | 425
Image Classification | UCF101       | Top-1 Acc      | 74.2   | 404
Action Recognition   | UCF101       | Accuracy       | 67.9   | 365
Image Classification | SVHN         | Accuracy       | 91.3   | 359
Image Classification | ImageNet     | Top-1 Accuracy | 67.4   | 324
Image Classification | Food101      | Accuracy       | 78.1   | 309
Image Classification | StanfordCars | Accuracy       | 55.8   | 266
Image Classification | RESISC45     | Accuracy       | 81.4   | 263

Showing 10 of 44 rows
