
Beyond Words: Augmenting Discriminative Richness via Diffusions in Unsupervised Prompt Learning

About

Fine-tuning vision-language models (VLMs) with large amounts of unlabeled data has recently garnered significant interest. However, a key challenge remains: the lack of high-quality pseudo-labeled data. Current pseudo-labeling strategies often struggle with mismatches between semantic and visual information, leading to sub-optimal performance of unsupervised prompt learning (UPL) methods. In this paper, we introduce a simple yet effective approach called **A**ugmenting D**i**scriminative **R**ichness via Diffusions (AiR), which learns a richer, more discriminative representation of each class and thereby facilitates classification. Specifically, our approach includes a pseudo-label generation module that leverages high-fidelity synthetic samples to build an auxiliary classifier. This classifier captures richer visual variation, bridging text-image-pair classification to a more robust image-image-pair classification. Additionally, we exploit the diversity of diffusion-based synthetic samples to enhance prompt learning, providing richer information for semantic-visual alignment. Extensive experiments on five public benchmarks, including RESISC45 and Flowers102, across three learning paradigms (UL, SSL, and TRZSL) demonstrate that AiR achieves substantial and consistent performance improvements over state-of-the-art unsupervised prompt learning methods.
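To make the idea concrete, here is a minimal sketch of the kind of fused pseudo-labeling the abstract describes: a query image is scored both against text prompt embeddings (text-image) and against diffusion-generated exemplar embeddings per class (image-image), and the two similarity signals are combined. All names (`pseudo_label`, the `alpha` fusion weight) and the simple cosine-similarity fusion are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def pseudo_label(image_emb, text_embs, synth_embs_per_class, alpha=0.5):
    """Hypothetical sketch of AiR-style pseudo-labeling.

    image_emb:            (d,) embedding of the unlabeled image
    text_embs:            (C, d) class prompt embeddings (text branch)
    synth_embs_per_class: list of C arrays, each (n_i, d), embeddings of
                          diffusion-generated synthetic samples per class
    alpha:                fusion weight between text and image similarity
                          (an assumed hyperparameter, not from the paper)
    """
    img = l2_normalize(image_emb)
    # Text-image logits: cosine similarity to each class prompt.
    txt_logits = l2_normalize(text_embs) @ img
    # Image-image logits: mean cosine similarity to each class's
    # synthetic exemplars (the auxiliary classifier's signal).
    img_logits = np.array([
        (l2_normalize(np.asarray(s)) @ img).mean()
        for s in synth_embs_per_class
    ])
    fused = alpha * txt_logits + (1 - alpha) * img_logits
    return int(np.argmax(fused)), fused
```

In this toy form, classes whose synthetic exemplars lie close to the query image in embedding space get boosted even when the text prompt alone is ambiguous, which is the intuition behind moving from text-image to image-image classification.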

Hairui Ren, Fan Tang, He Zhao, Zixuan Wang, Dandan Guo, Yi Chang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | FGVC-Aircraft (test) | -- | 231 |
| Image Classification | DTD (test) | Accuracy 69.9 | 181 |
| Image Classification | Flowers-102 (test) | Top-1 Accuracy 92.3 | 124 |
| Image Classification | EuroSAT (test) | -- | 59 |
| Image Classification | Resisc45 (test) | Top-1 Accuracy 87.8 | 34 |
