
MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models

About

Large-scale pre-trained Vision-Language Models (VLMs) have significantly advanced transfer learning across diverse tasks. However, adapting these models with limited few-shot data often leads to overfitting, undermining their ability to generalize to new tasks. To address this, we propose Multi-Modal Representation Learning (MMRL), which introduces a shared, learnable, modality-agnostic representation space. MMRL generates space tokens projected into both the text and image encoders as representation tokens, enabling more effective cross-modal interactions. Unlike prior methods that mainly optimize class token features, MMRL inserts representation tokens into higher encoder layers--where task-specific features are more prominent--while preserving general knowledge in the lower layers. During training, both class and representation features are jointly optimized: a trainable projection layer is applied to the representation tokens for task adaptation, while the projection layer for the class token remains frozen to retain pre-trained knowledge. To further promote generalization, we introduce a regularization term aligning class and text features with the frozen VLM's zero-shot features. At inference, a decoupling strategy uses both class and representation features for base tasks, but only class features for novel tasks due to their stronger generalization. Building upon this, we propose MMRL++, a parameter-efficient and interaction-aware extension that significantly reduces trainable parameters and enhances intra-modal interactions--particularly across the layers of representation tokens--allowing gradient sharing and instance-specific information to propagate more effectively through the network. Extensive experiments on 15 datasets demonstrate that MMRL and MMRL++ consistently outperform state-of-the-art methods, achieving a strong balance between task-specific adaptation and generalization.
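The decoupled inference strategy described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the function names (`cosine_logits`, `decoupled_inference`), the blending weight `alpha`, and the use of plain NumPy arrays in place of real encoder features are all assumptions made for clarity.

```python
import numpy as np

def cosine_logits(feats, text_feats):
    # Cosine-similarity logits between image-side features and
    # per-class text features, as in CLIP-style classification.
    f = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    return f @ t.T

def decoupled_inference(class_feat, rep_feat, text_feats,
                        is_base_task, alpha=0.5):
    """Sketch of MMRL's decoupling strategy (alpha is hypothetical):
    - base tasks: blend class-token and representation-token logits;
    - novel tasks: class-token logits only, since the frozen class
      pathway generalizes better."""
    logits_cls = cosine_logits(class_feat, text_feats)
    if not is_base_task:
        return logits_cls
    logits_rep = cosine_logits(rep_feat, text_feats)
    return alpha * logits_cls + (1 - alpha) * logits_rep
```

In practice the base/novel split is known from the evaluation protocol, so the branch is selected per benchmark rather than per sample.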

Yuncheng Guo, Xiaodong Gu • 2025

Related benchmarks

| Task                 | Dataset       | Metric    | Result | Rank |
|----------------------|---------------|-----------|--------|------|
| Image Classification | ImageNet A    | Top-1 Acc | 51.2   | 654  |
| Image Classification | EuroSAT       | Accuracy  | 93.5   | 569  |
| Image Classification | Flowers102    | Accuracy  | 66.6   | 558  |
| Image Classification | DTD           | Accuracy  | 75.3   | 485  |
| Image Classification | Food101       | Accuracy  | 87.03  | 457  |
| Image Classification | SUN397        | Accuracy  | 77.7   | 441  |
| Action Recognition   | UCF101        | Accuracy  | 87.6   | 431  |
| Image Classification | StanfordCars  | Accuracy  | 91.43  | 312  |
| Image Classification | FGVCAircraft  | Accuracy  | 86.73  | 261  |
| Image Classification | Caltech101    | Accuracy  | 67.49  | 228  |

Showing 10 of 30 rows.
