
DePT: Decoupled Prompt Tuning

About

This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the better a tuned model generalizes to the base (or target) task, the worse it generalizes to new tasks, and vice versa. Specifically, through an in-depth analysis of the learned features of the base and new tasks, we observe that the BNT stems from a channel bias issue: the vast majority of feature channels are occupied by base-specific knowledge, resulting in the collapse of the task-shared knowledge important to new tasks. To address this, we propose the Decoupled Prompt Tuning (DePT) framework, which decouples base-specific knowledge from feature channels into an isolated feature space during prompt tuning, so as to maximally preserve task-shared knowledge in the original feature space and thereby achieve better zero-shot generalization on new tasks. Importantly, DePT is orthogonal to existing prompt tuning methods, so it can improve all of them. Extensive experiments on 11 datasets demonstrate the flexibility and effectiveness of DePT. Our code and pretrained models are available at https://github.com/Koorye/DePT.
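The core idea described above can be sketched with a toy example. The snippet below is a minimal illustration, not the paper's actual architecture: it assumes a frozen encoder feature, a hypothetical learned linear head `W` that routes base-specific knowledge into an isolated lower-dimensional space for the base-task classifier, while the original feature space is left untouched for zero-shot inference on new tasks. Dimensions and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 128  # hypothetical feature dim and isolated-space dim

# Stand-in for a frozen encoder's output feature for one image
feat = rng.standard_normal(d)

# Hypothetical decoupling head: a learned linear map into an isolated
# space where base-specific knowledge is absorbed during prompt tuning
W = rng.standard_normal((k, d)) / np.sqrt(d)
base_feat = W @ feat  # used by the base-task classifier during tuning

# The original feature space is preserved for new tasks, so zero-shot
# prediction still matches `feat` against (e.g.) text embeddings,
# without interference from base-specific channels.
assert base_feat.shape == (k,)
assert feat.shape == (d,)
```

Because the isolated head only exists at tuning time for the base task, the original feature space is spared from the channel bias that would otherwise hurt new-task generalization.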

Ji Zhang, Shihan Wu, Lianli Gao, Heng Tao Shen, Jingkuan Song • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet | Top-1 Accuracy | 72.77 | 324 |
| Image Classification | Stanford Cars (test) | Accuracy | 66.23 | 306 |
| Image Classification | FGVC-Aircraft (test) | Accuracy | 24.3 | 231 |
| Image Classification | DTD (test) | Accuracy | 46.6 | 181 |
| Image Classification | SUN397 (test) | Top-1 Accuracy | 67.3 | 136 |
| Image Classification | Flowers-102 (test) | Top-1 Accuracy | 72.17 | 124 |
| Image Classification | Caltech101 (test) | Accuracy | 94.23 | 121 |
| Multimodal Multilabel Classification | MM-IMDB (test) | Macro F1 | 52.13 | 87 |
| Image Classification | Food101 (test) | -- | -- | 87 |
| Base-to-New Generalization | DTD | Base Accuracy | 85.07 | 68 |
Showing 10 of 53 rows

Other info

Code
