
Read-only Prompt Optimization for Vision-Language Few-shot Learning

About

In recent years, prompt tuning has proven effective in adapting pre-trained vision-language models to downstream tasks. These methods adapt the pre-trained models by introducing learnable prompts while keeping the pre-trained weights frozen. However, learnable prompts can shift the internal representations within the self-attention module, which may increase performance variance and hurt generalization, especially in data-deficient settings. To address these issues, we propose a novel approach, Read-only Prompt Optimization (RPO). RPO leverages masked attention to prevent internal representation shift in the pre-trained model. Further, to facilitate optimization, the read-only prompts are initialized from the special tokens of the pre-trained model. Our extensive experiments demonstrate that RPO outperforms CLIP and CoCoOp in base-to-new generalization and domain generalization while displaying better robustness. The proposed method also generalizes better in extremely data-deficient settings, while improving parameter efficiency and reducing computational overhead. Code is available at https://github.com/mlvlab/RPO.
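The "read-only" property described above can be sketched as an attention mask: the appended prompts may read from (attend to) the original tokens, but the original tokens are blocked from attending to the prompts, so their representations are exactly what the frozen model would produce. The snippet below is a minimal illustration of this idea using a PyTorch `MultiheadAttention` layer; the function name `read_only_attn_mask` and the toy dimensions are illustrative assumptions, not taken from the released RPO code.

```python
import torch
import torch.nn as nn

def read_only_attn_mask(seq_len: int, num_prompts: int) -> torch.Tensor:
    """Additive attention mask for `seq_len` original tokens followed by
    `num_prompts` read-only prompts. Original tokens attend only to other
    original tokens (their representations stay unchanged); prompts attend
    to everything."""
    total = seq_len + num_prompts
    mask = torch.zeros(total, total)
    # Block original tokens (rows) from attending to the prompts (columns).
    mask[:seq_len, seq_len:] = float("-inf")
    return mask

# Toy usage: 10 original tokens plus 4 read-only prompts.
embed_dim, num_heads, seq_len, num_prompts = 64, 4, 10, 4
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
x = torch.randn(2, seq_len + num_prompts, embed_dim)
mask = read_only_attn_mask(seq_len, num_prompts)
out, _ = attn(x, x, x, attn_mask=mask)
```

Because the mask removes the prompts from the original tokens' softmax, the first `seq_len` outputs are identical to running the layer on the original tokens alone, which is the representation-preservation property RPO relies on.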

Dongjun Lee, Seokwon Song, Jihee Suh, Joonmyung Choi, Sanghyeok Lee, Hyunwoo J. Kim • 2023

Related benchmarks

Task                 | Dataset               | Metric          | Result | Rank
---------------------|-----------------------|-----------------|--------|-----
Image Classification | Flowers102            | –               | –      | 478
Image Classification | ImageNet              | Top-1 Accuracy  | 71.67  | 324
Image Classification | Food101               | –               | –      | 309
Image Classification | StanfordCars          | –               | –      | 266
Image Classification | FGVC-Aircraft (test)  | Accuracy        | 37.33  | 231
Image Classification | FGVCAircraft          | –               | –      | 225
Image Classification | SUN397                | Accuracy (Base) | 80.6   | 131
Image Classification | Caltech101            | Base Accuracy   | 97.97  | 129
Image Classification | Caltech101 (test)     | –               | –      | 121
Image Classification | OxfordPets            | Base Accuracy   | 94.63  | 117

Showing 10 of 46 rows
