
Prompt Distribution Learning

About

We present prompt distribution learning for effectively adapting a pre-trained vision-language model to downstream recognition tasks. Our method not only learns low-bias prompts from a few samples but also captures the distribution of diverse prompts to handle varying visual representations. In this way, we provide high-quality task-related content that facilitates recognition. Prompt distribution learning is realized by an efficient approach that learns the output embeddings of prompts instead of the input embeddings. Thus, we can model them effectively with a Gaussian distribution and derive a surrogate loss for efficient training. Extensive experiments on 12 datasets demonstrate that our method consistently and significantly outperforms existing methods. For example, with 1 sample per category, it improves the average result by a relative 9.1% compared to human-crafted prompts.
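The core idea in the abstract, fitting a Gaussian over the output embeddings of multiple learned prompts and training with a surrogate loss, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the diagonal-covariance simplification, and the exact form of the variance term are assumptions; the paper derives the precise surrogate bound.

```python
import numpy as np

def gaussian_prompt_stats(text_emb):
    """text_emb: (K, C, D) output embeddings of K learned prompts for C
    classes. Returns per-class Gaussian parameters: mean (C, D) and a
    diagonal variance (C, D). (Diagonal covariance is a simplification.)"""
    return text_emb.mean(axis=0), text_emb.var(axis=0)

def surrogate_loss(image_emb, labels, mu, var, tau=0.1):
    """Cross-entropy on surrogate logits: each non-target class receives a
    variance bonus (acting as an upper bound on the expected loss over
    sampled prompts), while the target class keeps only its mean
    similarity. image_emb: (N, D) image features; labels: (N,);
    mu, var: (C, D); tau: softmax temperature. Sketch only -- the exact
    bound in the paper may differ."""
    mean_sim = image_emb @ mu.T / tau                        # (N, C)
    var_term = (image_emb ** 2) @ var.T / (2.0 * tau ** 2)   # diag x^T Sigma x
    logits = mean_sim + var_term
    n = np.arange(len(labels))
    logits[n, labels] = mean_sim[n, labels]  # no variance bonus for the true class
    # numerically stable log-sum-exp for the cross-entropy
    m = logits.max(axis=1, keepdims=True)
    lse = np.log(np.exp(logits - m).sum(axis=1)) + m[:, 0]
    return float((lse - logits[n, labels]).mean())
```

Because only the prompt *output* embeddings are learned, the Gaussian statistics can be computed once per class and reused for every image, which is what makes the approach efficient.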

Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 65.3 | 798 |
| Image Classification | Flowers102 | -- | -- | 478 |
| Image Classification | STL-10 (test) | Accuracy | 96.3 | 357 |
| Image Classification | Food101 | -- | -- | 309 |
| Image Classification | Stanford Cars (test) | Accuracy | 75.5 | 306 |
| Image Classification | StanfordCars | -- | -- | 266 |
| Image Classification | FGVC-Aircraft (test) | Accuracy | 36.6 | 231 |
| Image Classification | FGVCAircraft | -- | -- | 225 |
| Image Classification | DTD (test) | Accuracy | 70.1 | 181 |
| Image Classification | SUN397 | Accuracy (Base) | 78.67 | 131 |

Showing 10 of 44 rows
