
Cross-Modal Prototype Alignment and Mixing for Training-Free Few-Shot Classification

About

Vision-language models (VLMs) like CLIP are trained with the objective of aligning text and image pairs. To improve CLIP-based few-shot image classification, recent works have observed that, along with text embeddings, image embeddings from the training set are an important source of information. In this work we investigate the impact of directly mixing image and text prototypes for few-shot classification and analyze this from a bias-variance perspective. We show that mixing prototypes acts like a shrinkage estimator. Although mixed prototypes improve classification performance, the image prototypes still add some noise in the form of instance-specific background or context information. In order to capture only information from the image space relevant to the given classification task, we propose projecting image prototypes onto the principal directions of the semantic text embedding space to obtain a text-aligned semantic image subspace. These text-aligned image prototypes, when mixed with text embeddings, further improve classification. However, for downstream datasets with poor cross-modal alignment in CLIP, semantic alignment might be suboptimal. We show that the image subspace can still be leveraged by modeling the anisotropy using class covariances. We demonstrate that combining a text-aligned mixed prototype classifier and an image-specific LDA classifier outperforms existing methods across few-shot classification benchmarks.
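The abstract describes three ingredients: convexly mixing text and image class prototypes (a shrinkage-style estimator), projecting image prototypes onto the principal directions of the text embedding space to filter out instance-specific noise, and an LDA classifier that models anisotropy via class covariances. A minimal numpy sketch of these ideas follows; the function names, the mixing weight `alpha`, the subspace dimension `k`, and the covariance regularizer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l2norm(x):
    """L2-normalize along the last axis (CLIP embeddings live on the sphere)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def text_aligned_image_prototypes(image_protos, text_embeds, k=32):
    """Project image prototypes onto the top-k principal directions of the
    text embedding space (k is a hypothetical hyperparameter)."""
    centered = text_embeds - text_embeds.mean(axis=0, keepdims=True)
    # Principal directions via SVD of the centered text embeddings.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                                   # (k, d)
    projected = image_protos @ basis.T @ basis       # project, map back to ambient space
    return l2norm(projected)

def mixed_prototype_classifier(query, text_protos, image_protos, alpha=0.5):
    """Cosine classifier on a convex mix of text and image prototypes
    (the shrinkage-style estimator described in the abstract)."""
    protos = l2norm(alpha * l2norm(text_protos) + (1 - alpha) * l2norm(image_protos))
    return (l2norm(query) @ protos.T).argmax(axis=-1)

def lda_scores(query, class_means, shared_cov, eps=1e-4):
    """Linear discriminant scores with a shared covariance, capturing
    anisotropy of the image features (regularizer eps is an assumption)."""
    d = shared_cov.shape[0]
    prec = np.linalg.inv(shared_cov + eps * np.eye(d))
    lin = query @ prec @ class_means.T                       # x^T S^-1 mu_c
    quad = 0.5 * np.einsum('cd,de,ce->c', class_means, prec, class_means)
    return lin - quad
```

With `k` equal to the full embedding dimension the projection is the identity, so smaller `k` is what discards directions of the image space not spanned by the dominant text semantics; combining the mixed-prototype scores with the LDA scores (e.g. by summing them) would correspond to the ensemble the abstract reports.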

Dipam Goswami, Simone Magistri, Gido M. van de Ven, Bartłomiej Twardowski, Andrew D. Bagdanov, Tinne Tuytelaars, Joost van de Weijer • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | Stanford Cars | Accuracy | 75.8 | 635 |
| Image Classification | EuroSAT | Accuracy | 86 | 569 |
| Image Classification | Flowers102 | Accuracy | 96.1 | 558 |
| Image Classification | Food101 | Accuracy | 79.1 | 457 |
| Image Classification | SUN397 | Accuracy | 70.9 | 441 |
| Image Classification | Oxford-IIIT Pets | Accuracy | 90.3 | 306 |
| Image Classification | Caltech101 | Accuracy | 92.7 | 228 |
| Image Classification | FGVC Aircraft | -- | -- | 203 |
| Few-shot Image Classification | 11-dataset CLIP-based average (ImageNet, Caltech101, OxfordPets, StanfordCars, Flowers102, Food101, FGVCAircraft, SUN397, DTD, EuroSAT, UCF101) | Average Accuracy | 83.8 | 69 |
| Image Classification | DTD (Describable Textures Dataset) | Accuracy | 67 | 57 |
