
AmPLe: Supporting Vision-Language Models via Adaptive-Debiased Ensemble Multi-Prompt Learning

About

Multi-prompt learning methods have emerged as an effective approach for rapidly adapting vision-language models to downstream tasks with limited resources. Existing multi-prompt learning methods primarily focus on utilizing various meticulously designed prompts within a single foundation vision-language model to achieve superior performance. However, an overlooked model-prompt matching bias hinders the development of multi-prompt learning: the same prompt can convey different semantics across distinct vision-language models, such as CLIP-ViT-B/16 and CLIP-ViT-B/32, resulting in inconsistent predictions for the identical prompt. To mitigate the impact of this bias on downstream tasks, we explore an ensemble learning approach to sufficiently aggregate the benefits of the diverse predictions. Additionally, we disclose the presence of a sample-prompt matching bias, which originates from the prompt-irrelevant semantics encapsulated in the input samples. Thus, directly utilizing all information from the input samples to generate the ensemble weights can lead to suboptimal performance. In response, we extract prompt-relevant semantics from the input samples under the guidance of an information theory-based analysis, adaptively calculating debiased ensemble weights. Overall, we propose Adaptive-Debiased Ensemble Multi-Prompt Learning, abbreviated as AmPLe, to mitigate the two types of bias simultaneously. Extensive experiments on three representative tasks, i.e., generalization to novel classes, new target datasets, and unseen domain shifts, show that AmPLe consistently outperforms existing methods. Theoretical validation from a causal perspective further supports the effectiveness of AmPLe.
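The ensemble idea described above can be illustrated with a minimal sketch: per-model class predictions (e.g., from CLIP-ViT-B/16 and CLIP-ViT-B/32) are combined with per-sample weights produced by a weight head. This is not the paper's implementation; the function names, shapes, and the plain softmax weight head are illustrative assumptions, and the information-theoretic debiasing step is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_ensemble(logits_per_model, weight_logits):
    """Combine per-model class logits with per-sample adaptive weights.

    logits_per_model: (num_models, batch, num_classes) class logits,
                      one slice per vision-language backbone
    weight_logits:    (batch, num_models) raw scores from a (hypothetical)
                      weight head computed on sample features
    returns:          (batch, num_classes) ensembled class probabilities
    """
    probs = softmax(logits_per_model, axis=-1)        # per-model predictions
    w = softmax(weight_logits, axis=-1).T[..., None]  # (num_models, batch, 1)
    return (w * probs).sum(axis=0)                    # convex combination

# Example: 2 backbones, a batch of 3 images, 5 classes
rng = np.random.default_rng(0)
ensembled = adaptive_ensemble(rng.normal(size=(2, 3, 5)),
                              rng.normal(size=(3, 2)))
```

Because each model's output is a probability distribution and the weights sum to one per sample, the ensembled output is itself a valid distribution over classes.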

Fei Song, Yi Li, Jiangmeng Li, Rui Wang, Changwen Zheng, Fanjiang Xu, Hui Xiong • 2025

Related benchmarks

Task                  Dataset        Metric           Result  Rank
Image Classification  Flowers102     Accuracy         74.13   478
Image Classification  DTD            Accuracy         51.13   419
Image Classification  UCF101         Top-1 Acc        71.57   404
Image Classification  Food101        Accuracy         86.7    309
Image Classification  Aircraft       Accuracy         26.93   302
Image Classification  StanfordCars   Accuracy         69.83   266
Image Classification  SUN397         Accuracy         69.97   246
Image Classification  Caltech101     Accuracy         95.17   162
Image Classification  SUN397         Accuracy (Base)  83.97   131
Image Classification  OxfordPets     Base Accuracy    95.73   117

Showing 10 of 21 rows.
