
Visual Prompt Tuning

About

The current modus operandi in adapting pre-trained models is to update all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small number of trainable parameters (less than 1% of the model's parameters) in the input space while keeping the model backbone frozen. Through extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.

Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim · 2022
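The core idea described above can be sketched in a few lines: learnable prompt tokens are prepended to the patch-embedding sequence in the input space, and only those tokens (plus a task head, omitted here) are trained while the Transformer backbone stays frozen. This is a minimal NumPy illustration of the VPT-Shallow variant; the shapes and names (`prompts`, `patch_embeds`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 768          # embedding dimension (ViT-Base)
num_prompts = 5  # a handful of trainable prompt tokens
num_patches = 196  # 14 x 14 patches for a 224x224 image

# Frozen backbone input: [CLS] token + patch embeddings.
cls_token = rng.standard_normal((1, d))
patch_embeds = rng.standard_normal((num_patches, d))

# The only new parameters: learnable prompts in the input space
# (they would be optimized by the downstream task loss).
prompts = np.zeros((num_prompts, d))

# VPT-Shallow: insert the prompts between [CLS] and the patches,
# then feed the extended sequence to the frozen Transformer encoder.
x = np.concatenate([cls_token, prompts, patch_embeds], axis=0)
assert x.shape == (1 + num_prompts + num_patches, d)

# Trainable-parameter count: 5 * 768 = 3,840, i.e. well under 1%
# of a ViT-Base backbone's ~86M parameters.
trainable = prompts.size
print(trainable)
```

VPT-Deep differs only in that a separate set of prompts is inserted at the input of every Transformer layer rather than just the first.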

Related benchmarks

Task                     Dataset                  Metric            Result   Rank
Semantic segmentation    ADE20K (val)             mIoU              49.9     2888
Image Classification     ImageNet-1K 1.0 (val)    Top-1 Accuracy    83.58    1952
Mathematical Reasoning   GSM8K                    Accuracy          75.66    1362
Image Classification     ImageNet 1k (test)       Top-1 Accuracy    81.68    848
Image Super-resolution   Manga109                 PSNR              23.98    821
Image Classification     ImageNet A               Top-1 Accuracy    35.17    654
Image Classification     Stanford Cars            Accuracy          83.6     635
Image Classification     ImageNet V2              Top-1 Accuracy    68.51    611
Image Classification     EuroSAT                  Accuracy          62.24    569
Image Super-resolution   Set5 (test)              PSNR              32.71    566

Showing 10 of 340 rows.

Other info

Code
