
Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement

About

The popularity of Contrastive Language-Image Pre-training (CLIP) has propelled its application to diverse downstream vision tasks. To improve its capacity on downstream tasks, few-shot learning has become a widely adopted technique. However, existing methods either exhibit limited performance or suffer from excessive learnable parameters. In this paper, we propose APE, an Adaptive Prior rEfinement method for CLIP's pre-trained knowledge, which achieves superior accuracy with high computational efficiency. Via a prior refinement module, we analyze the inter-class disparity in the downstream data and decouple the domain-specific knowledge from the CLIP-extracted cache model. On top of that, we introduce two model variants: a training-free APE and a training-required APE-T. We explore the trilateral affinities between the test image, the prior cache model, and the textual representations, and only enable a lightweight category-residual module to be trained. In average accuracy over 11 benchmarks, both APE and APE-T attain state-of-the-art results, respectively outperforming the second-best by +1.59% and +1.99% under 16 shots with ×30 fewer learnable parameters.
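The abstract describes the method only at a high level. As an illustration of the general idea (a training-free cache model that combines CLIP's image-text logits with image-image affinities computed over a refined subset of feature channels), here is a minimal NumPy sketch. The function name, the `alpha`/`beta` blending parameters, and the pre-supplied `channel_idx` are assumptions for illustration, not the paper's actual implementation; in particular, APE selects the refined channels via an inter-class disparity criterion that is not reproduced here.

```python
import numpy as np

def ape_logits(test_feat, cache_keys, cache_values, text_weights,
               channel_idx, alpha=1.0, beta=5.5):
    """Training-free, APE-style logits for one test image (sketch).

    test_feat:    (D,)   L2-normalized CLIP feature of the test image
    cache_keys:   (N, D) L2-normalized features of the few-shot images
    cache_values: (N, C) one-hot labels of the few-shot images
    text_weights: (D, C) L2-normalized CLIP text embeddings per class
    channel_idx:  indices of the refined (most discriminative) channels
    """
    # Image-text affinity: standard zero-shot CLIP logits.
    clip_logits = 100.0 * test_feat @ text_weights                  # (C,)

    # Image-image affinity, restricted to the refined channels.
    affinity = cache_keys[:, channel_idx] @ test_feat[channel_idx]  # (N,)

    # Tip-Adapter-style sharpening of the affinity, then label lookup.
    cache_logits = np.exp(-beta * (1.0 - affinity)) @ cache_values  # (C,)

    return clip_logits + alpha * cache_logits

# Tiny random demo: 3 classes, 4 shots each, 32-d features.
rng = np.random.default_rng(0)
D, C, shots = 32, 3, 4
feat = rng.normal(size=D); feat /= np.linalg.norm(feat)
keys = rng.normal(size=(C * shots, D))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = np.eye(C)[np.repeat(np.arange(C), shots)]
text_w = rng.normal(size=(D, C))
text_w /= np.linalg.norm(text_w, axis=0, keepdims=True)
refined = np.arange(16)  # pretend the first half of the channels were kept
logits = ape_logits(feat, keys, values, text_w, refined)
print(logits.shape)  # (3,)
```

The training-required variant, APE-T, would additionally learn small category residuals on top of these logits; only that lightweight module is trained, which is where the ×30 parameter saving comes from.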

Xiangyang Zhu, Renrui Zhang, Bowei He, Aojun Zhou, Dong Wang, Bin Zhao, Peng Gao • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 68.74 | 798 |
| Image Classification | ImageNet V2 (test) | Top-1 Accuracy | 59.58 | 181 |
| Image Classification | Oxford-IIIT Pet | Accuracy | 93.46 | 161 |
| Image Classification | ImageNet-Sketch (test) | Top-1 Accuracy | 0.4328 | 132 |
| Image Classification | Average 11 datasets | -- | -- | 52 |
| Image Classification | ImageNet V2 (Target) | Accuracy | 55.94 | 42 |
| Image Classification | ImageNet-Sketch (Target) | Accuracy | 36.61 | 30 |
| Image Classification | ImageNet (source) | Accuracy | 63.42 | 23 |
| Image Classification | ImageNet-1k (val) | Accuracy | 74.3 | 20 |
| Classification | ImageNet 16-shot | Accuracy | 63.38 | 5 |
