
Neural Prompt Search

About

The size of vision models has grown exponentially over the last few years, especially after the emergence of Vision Transformer. This has motivated the development of parameter-efficient tuning methods, such as learning adapter layers or visual prompt tokens, which allow a tiny portion of model parameters to be trained whereas the vast majority obtained from pre-training are frozen. However, designing a proper tuning method is non-trivial: one might need to try out a lengthy list of design choices, not to mention that each downstream dataset often requires custom designs. In this paper, we view the existing parameter-efficient tuning methods as "prompt modules" and propose Neural prOmpt seArcH (NOAH), a novel approach that learns, for large vision models, the optimal design of prompt modules through a neural architecture search algorithm, specifically for each downstream dataset. By conducting extensive experiments on over 20 vision datasets, we demonstrate that NOAH (i) is superior to individual prompt modules, (ii) has a good few-shot learning ability, and (iii) is domain-generalizable. The code and models are available at https://github.com/Davidzhangyuanhan/NOAH.
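To make the idea concrete, below is a minimal sketch of what searching over prompt-module designs can look like. It assumes a search space over three common module types (adapter, LoRA, visual prompt tokens) with a per-block dimension choice, where 0 means the module is disabled in that block; all names, dimension choices, and the parameter-count formula are illustrative, not NOAH's actual implementation (a real run would train a weight-sharing supernet and score each sampled subnet on the downstream validation set).

```python
import random

# Hypothetical per-block search space (values are illustrative assumptions).
SEARCH_SPACE = {
    "adapter_dim": [0, 1, 5, 10],   # bottleneck width; 0 = adapter disabled
    "lora_dim":    [0, 1, 5, 10],   # low-rank dimension; 0 = LoRA disabled
    "vpt_tokens":  [0, 1, 5, 10],   # number of prompt tokens; 0 = VPT disabled
}

def sample_subnet(num_blocks, rng=random):
    """Sample one candidate: a prompt-module configuration per transformer block."""
    return [
        {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}
        for _ in range(num_blocks)
    ]

def extra_params(config, embed_dim=768):
    """Rough count of trainable parameters a candidate adds (illustrative)."""
    total = 0
    for block in config:
        total += 2 * embed_dim * block["adapter_dim"]  # down- and up-projection
        total += 2 * embed_dim * block["lora_dim"]     # low-rank factors A and B
        total += embed_dim * block["vpt_tokens"]       # prompt token embeddings
    return total

# A search loop would repeatedly call sample_subnet(), evaluate each candidate
# with inherited supernet weights, and keep the best-scoring configuration
# for the given downstream dataset.
```

The key point the sketch captures is that the "design choices" mentioned above (which module, at what capacity, in which block) become a discrete space that a search algorithm can explore per dataset, instead of being fixed by hand.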

Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu • 2022

Related benchmarks

Task                   | Dataset                                                                                                   | Result                        | Rank
Image Classification   | VTAB 1K                                                                                                   | Overall Mean Accuracy: 75.48  | 204
Image Classification   | ImageNet V2 (test)                                                                                        | Top-1 Accuracy: 66.1          | 181
Image Classification   | ImageNet-A (test)                                                                                         | --                            | 154
Image Classification   | ImageNet-Sketch (test)                                                                                    | --                            | 132
Image Classification   | VTAB 1k (test)                                                                                            | --                            | 121
Image Classification   | ImageNet-R (test)                                                                                         | Accuracy: 28.5                | 105
Image Classification   | VTAB-1K 1.0 (test)                                                                                        | --                            | 102
Visual Task Adaptation | VTAB 1K                                                                                                   | Average Accuracy: 75.5        | 78
Image Classification   | ImageNet Domain Generalization (Source: ImageNet; Targets: ImageNetV2, ImageNet-Sketch, ImageNet-A, ImageNet-R) (test) | Accuracy (ImageNetV2): 66.1 | 53
Image Classification   | ImageNet V2 (Target)                                                                                      | Accuracy: 66.1                | 42

Showing 10 of 17 rows

Other info

Code: https://github.com/Davidzhangyuanhan/NOAH
