
DiMPLe -- Disentangled Multi-Modal Prompt Learning: Enhancing Out-Of-Distribution Alignment with Invariant and Spurious Feature Separation

About

We introduce DiMPLe (Disentangled Multi-Modal Prompt Learning), a novel approach that disentangles invariant and spurious features across vision and language modalities in multi-modal learning. Spurious correlations in visual data often hinder out-of-distribution (OOD) performance. Unlike prior methods that focus solely on image features, DiMPLe disentangles features both within and across modalities while maintaining consistent alignment, enabling better generalization to novel classes and robustness to distribution shifts. Our method combines three key objectives: (1) mutual information minimization between invariant and spurious features, (2) spurious feature regularization, and (3) contrastive learning on invariant features. Extensive experiments show that DiMPLe outperforms CoOp-OOD when averaged across 11 diverse datasets, achieving absolute gains of 15.27 points in base-class accuracy and 44.31 points in novel-class accuracy.
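To make the three objectives concrete, here is a minimal PyTorch sketch of how such a combined loss could be composed. This is illustrative only, not the authors' implementation: the cross-correlation penalty is a crude stand-in for a true mutual-information minimizer, the uniform-prediction term is one plausible form of spurious feature regularization, and all function names, weights (lambda_mi, lambda_spu), and dimensions are hypothetical.

import torch
import torch.nn.functional as F

def info_nce(z_img, z_txt, temperature=0.07):
    # Objective 3: contrastive alignment of *invariant* image/text features.
    z_img = F.normalize(z_img, dim=-1)
    z_txt = F.normalize(z_txt, dim=-1)
    logits = z_img @ z_txt.t() / temperature
    targets = torch.arange(z_img.size(0), device=z_img.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def cross_correlation_penalty(z_inv, z_spu):
    # Objective 1 (stand-in): discourage statistical dependence between
    # invariant and spurious features by penalizing their cross-covariance.
    # A learned mutual-information estimator could replace this term.
    z_inv = z_inv - z_inv.mean(dim=0)
    z_spu = z_spu - z_spu.mean(dim=0)
    cov = z_inv.t() @ z_spu / z_inv.size(0)
    return cov.pow(2).mean()

def spurious_uniformity_loss(spu_logits):
    # Objective 2 (stand-in): regularize spurious features so class labels
    # cannot be decoded from them (predictions pushed toward uniform).
    log_probs = F.log_softmax(spu_logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / spu_logits.size(-1))
    return F.kl_div(log_probs, uniform, reduction="batchmean")

def dimple_style_loss(img_inv, img_spu, txt_inv, txt_spu,
                      spu_logits, lambda_mi=1.0, lambda_spu=0.5):
    # Hypothetical weighting; the paper's actual coefficients are not given here.
    contrastive = info_nce(img_inv, txt_inv)
    mi = (cross_correlation_penalty(img_inv, img_spu)
          + cross_correlation_penalty(txt_inv, txt_spu))
    spu_reg = spurious_uniformity_loss(spu_logits)
    return contrastive + lambda_mi * mi + lambda_spu * spu_reg

# Toy usage with random features (batch of 8, 512-dim embeddings, 10 classes).
B, D, C = 8, 512, 10
loss = dimple_style_loss(torch.randn(B, D), torch.randn(B, D),
                         torch.randn(B, D), torch.randn(B, D),
                         torch.randn(B, C))
print(loss.item())

Note the disentanglement is applied per modality (image and text each split into invariant and spurious parts), while the contrastive term aligns only the invariant parts across modalities, matching the cross-modal alignment the abstract describes.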

Umaima Rahman, Mohammad Yaqub, Dwarikanath Mahapatra • 2025

Related benchmarks

Task                 | Dataset      | Metric                | Result | Rank
Image Classification | Flowers102   | --                    | --     | 478
Image Classification | Food101      | --                    | --     | 309
Image Classification | StanfordCars | --                    | --     | 266
Image Classification | SUN397       | Accuracy (Base)       | 75.43  | 131
Image Classification | OxfordPets   | Base Accuracy         | 91.57  | 117
Image Classification | Caltech101   | Base Accuracy         | 97.43  | 106
Image Classification | DTD          | Base Score            | 69.8   | 79
Image Classification | ImageNet     | Base Score            | 71.87  | 79
Action Recognition   | UCF101       | Base Accuracy         | 78.1   | 62
Image Classification | ImageNet Domain Generalization (Source: ImageNet; Targets: ImageNetV2, ImageNet-Sketch, ImageNet-A, ImageNet-R) (test) | Accuracy (ImageNetV2) | 61.2 | 53

(10 of 13 rows shown)
