
RMAdapter: Reconstruction-based Multi-Modal Adapter for Vision-Language Models

About

Pre-trained Vision-Language Models (VLMs), e.g., CLIP, have become essential tools in multimodal transfer learning. However, fine-tuning VLMs in few-shot scenarios poses significant challenges in balancing task-specific adaptation against generalization in the resulting model. Meanwhile, current research has predominantly focused on prompt-based adaptation methods, leaving adapter-based approaches underexplored and showing notable performance gaps. To address these challenges, we introduce a novel Reconstruction-based Multimodal Adapter (RMAdapter), which leverages a dual-branch architecture. Unlike conventional single-branch adapters, RMAdapter consists of: (1) an adaptation branch that injects task-specific knowledge through parameter-efficient fine-tuning, and (2) a reconstruction branch that preserves general knowledge by reconstructing latent-space features back into the original feature space. This design maintains a dynamic balance between general and task-specific knowledge. Importantly, although RMAdapter introduces an additional reconstruction branch, it remains lightweight: the reconstruction loss is computed locally at each layer and the projection modules are shared, keeping the overall computational overhead minimal. A consistency constraint is also incorporated to better regulate the trade-off between discriminability and generalization. We comprehensively evaluate RMAdapter on three representative tasks: generalization to new categories, generalization to new target datasets, and domain generalization. Without relying on data augmentation or duplicated prompt designs, RMAdapter consistently outperforms state-of-the-art approaches across all evaluation metrics.
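The abstract describes a dual-branch design: a shared low-rank projection feeds both an adaptation branch (a residual task-specific update) and a reconstruction branch that maps latent features back to the input space, with the reconstruction loss computed locally at each layer. The paper's actual implementation is not shown on this page; the following is a minimal numpy sketch under assumed shapes and names (`d` feature dim, `r` bottleneck dim, weight names `W_down`/`W_up`/`W_rec` are all hypothetical):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class RMAdapterSketch:
    """Hypothetical sketch of one dual-branch adapter layer.

    Assumptions (not from the paper): a single shared down-projection,
    a ReLU nonlinearity, and an MSE reconstruction loss.
    """

    def __init__(self, d, r, seed=0):
        rng = np.random.default_rng(seed)
        # Shared down-projection into a low-rank latent space (r << d).
        self.W_down = rng.normal(0.0, 0.02, size=(d, r))
        # Adaptation branch: up-projection injecting task-specific knowledge.
        self.W_up = rng.normal(0.0, 0.02, size=(r, d))
        # Reconstruction branch: maps latent features back to the input space.
        self.W_rec = rng.normal(0.0, 0.02, size=(r, d))

    def forward(self, x):
        z = relu(x @ self.W_down)        # shared latent features
        adapted = x + z @ self.W_up      # residual task-specific update
        recon = z @ self.W_rec           # reconstruction of the input features
        # Layer-local reconstruction loss regularizes toward general knowledge.
        rec_loss = float(np.mean((recon - x) ** 2))
        return adapted, rec_loss

# Usage: one forward pass through a single adapter layer.
adapter = RMAdapterSketch(d=512, r=16)
x = np.random.default_rng(1).normal(size=(4, 512))
out, loss = adapter.forward(x)
print(out.shape, loss >= 0.0)
```

Because the loss is computed per layer from that layer's own input, no extra backbone forward pass is needed, which is consistent with the lightweight claim in the abstract.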

Xiang Lin, Weixin Li, Shu Guo, Lihong Wang, Di Huang • 2025

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | FGVC-Aircraft (test) | -- | 231
Image Classification | FGVCAircraft | -- | 225
Image Classification | ImageNet V2 (test) | -- | 181
Image Classification | ImageNet-A (test) | -- | 154
Image Classification | ImageNet-Sketch (test) | -- | 132
Image Classification | SUN397 | Accuracy (Base): 82.87 | 131
Image Classification | Caltech101 | Base Accuracy: 98.4 | 106
Image Classification | ImageNet-R (test) | Accuracy: 77.7 | 105
Image Classification | EuroSAT Base-to-New | Base Score: 92.17 | 65
Image Classification | Caltech101 Base and New Classes | Base Accuracy: 98.4 | 50

Showing 10 of 20 rows
