
Aggregate-and-Adapt Natural Language Prompts for Downstream Generalization of CLIP

About

Large pretrained vision-language models like CLIP show promising generalization capability, but may struggle in specialized domains (e.g., satellite imagery) or fine-grained classification (e.g., car models) where the visual concepts are unseen or under-represented during pretraining. Prompt learning offers a parameter-efficient finetuning framework that can adapt CLIP to downstream tasks even when limited annotated data are available. In this paper, we improve prompt learning by distilling the textual knowledge from natural language prompts (either human- or LLM-generated) to provide rich priors for those under-represented concepts. We first obtain a prompt "summary" aligned to each input image via a learned prompt aggregator. Then we jointly train a prompt generator, optimized to produce a prompt embedding that stays close to the aggregated summary while minimizing task loss at the same time. We call this prompt embedding the Aggregate-and-Adapted Prompt Embedding (AAPE). AAPE generalizes to different downstream data distributions and tasks, including vision-language understanding tasks (e.g., few-shot classification, VQA) and generation tasks (image captioning), where it achieves competitive performance. We also show that AAPE is particularly helpful for handling non-canonical and out-of-distribution (OOD) examples. Furthermore, AAPE learning eliminates the LLM-based inference cost required by baselines, and scales better with data and LLM model size.
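The two-stage idea described above (aggregate natural-language prompts into a per-image summary, then train a generator to stay close to that summary while minimizing task loss) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the module names, dimensions, attention-based aggregator, and the simple additive way the generated prompt conditions the class text embeddings are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptAggregator(nn.Module):
    """Attends over K natural-language prompt embeddings, conditioned on the
    image embedding, to produce a per-image prompt "summary" (a sketch of the
    learned aggregator described in the abstract)."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, img_emb, prompt_embs):
        # img_emb: (B, D); prompt_embs: (K, D)
        q = self.query(img_emb)                                   # (B, D)
        k = self.key(prompt_embs)                                 # (K, D)
        attn = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # (B, K)
        return attn @ prompt_embs                                 # (B, D)


class PromptGenerator(nn.Module):
    """Maps an image embedding to a generated prompt embedding (the AAPE)."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, img_emb):
        return self.net(img_emb)


def aape_loss(img_emb, labels, class_text_embs, prompt_embs,
              aggregator, generator, distill_weight=1.0):
    """Task loss plus a distillation term that keeps the generated prompt
    close to the aggregated summary. The summary is detached here, i.e.
    treated as a fixed target for simplicity."""
    summary = aggregator(img_emb, prompt_embs).detach()           # (B, D)
    aape = generator(img_emb)                                     # (B, D)

    # Hypothetical conditioning: shift each class text embedding by the
    # generated prompt, then score against the image embedding.
    cond = class_text_embs.unsqueeze(0) + aape.unsqueeze(1)       # (B, C, D)
    logits = (img_emb.unsqueeze(1) * cond).sum(-1)                # (B, C)

    task = F.cross_entropy(logits, labels)
    distill = F.mse_loss(aape, summary)
    return task + distill_weight * distill
```

At inference time, only the prompt generator is needed: it produces AAPE directly from the image embedding, which is how this setup avoids the per-example LLM inference cost mentioned in the abstract.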

Chen Huang, Skyler Seto, Samira Abnar, David Grangier, Navdeep Jaitly, Josh Susskind • 2024

Related benchmarks

Task | Dataset | Result | Rank
Image-to-Text Retrieval | Flickr30K | R@1: 94.2 | 379
Image Classification | Food101 | -- | 309
Image Classification | SUN397 | Accuracy (Base): 82.93 | 131
Image-to-Text Retrieval | COCO | R@1: 76.7 | 123
Image Classification | OxfordPets | Base Accuracy: 96.89 | 117
Image Classification | 11 datasets base-to-new average | Base Average Score: 84.72 | 81
Image Classification | UCF101 | Base Classes Acc: 87.69 | 62
Few-shot Image Classification | StanfordCars | Accuracy: 0.7751 | 21
Few-shot Classification | ImageNet (source) | Accuracy: 73.56 | 14
Few-shot Classification | ImageNet V2 (target) | Accuracy: 65.97 | 14

(Showing 10 of 23 rows.)
