
Multi-Modal Adapter for Vision-Language Models

About

Large pre-trained vision-language models, such as CLIP, have demonstrated state-of-the-art performance across a wide range of image classification tasks without requiring retraining. In the few-shot setting, CLIP is competitive with specialized architectures trained directly on the downstream tasks. Recent research shows that CLIP's performance can be further improved with lightweight adaptation approaches. However, previous methods adapt each modality of the CLIP model individually, ignoring the interactions and relationships between visual and textual representations. In this work, we propose Multi-Modal Adapter, an approach for multi-modal adaptation of CLIP. Specifically, we add a trainable Multi-Head Attention layer that combines text and image features to produce an additive adaptation of both. Multi-Modal Adapter demonstrates improved generalizability, based on its performance on unseen classes, compared to existing adaptation methods. We perform additional ablations and investigations to validate and interpret the proposed approach.

Dominykas Seputis, Serghei Mihailov, Soham Chatterjee, Zehao Xiao • 2024
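The core idea from the abstract, a trainable Multi-Head Attention layer that mixes text and image features and adds the result back onto both, can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' implementation: the embedding dimension, head count, residual scale `alpha`, and the choice to stack the two modalities as a two-token sequence are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class MultiModalAdapter(nn.Module):
    """Minimal sketch: a trainable multi-head attention layer that mixes
    CLIP text and image features and returns an additive adaptation of both.
    All hyperparameters here are illustrative assumptions, not the paper's
    reported settings."""

    def __init__(self, embed_dim: int = 512, num_heads: int = 8, alpha: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.alpha = alpha  # strength of the additive (residual) adaptation

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # Stack the two modalities as a two-token sequence per example,
        # shape (batch, 2, embed_dim), so attention can model their interaction.
        tokens = torch.stack([text_feats, image_feats], dim=1)
        mixed, _ = self.attn(tokens, tokens, tokens)
        # Additive adaptation on top of the frozen CLIP features.
        adapted_text = text_feats + self.alpha * mixed[:, 0]
        adapted_image = image_feats + self.alpha * mixed[:, 1]
        return adapted_text, adapted_image

# Usage with dummy CLIP-sized embeddings (e.g., ViT-B/32 outputs 512-d features).
adapter = MultiModalAdapter()
text = torch.randn(8, 512)   # e.g., encoded class prompts
image = torch.randn(8, 512)  # e.g., encoded images
new_text, new_image = adapter(text, image)
```

In a few-shot CLIP pipeline, the frozen encoders' outputs would pass through an adapter like this before computing image-text similarities, with only the adapter's attention weights being trained.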

Related benchmarks

Task                 | Dataset     | Metric    | Result | Rank
---------------------|-------------|-----------|--------|-----
Image Classification | ImageNet A  | Top-1 Acc | 51.12  | 553
Image Classification | EuroSAT     | Accuracy  | 92.37  | 497
Image Classification | ImageNet V2 | --        | --     | 487
Image Classification | Flowers102  | Accuracy  | 72.07  | 478
Image Classification | ImageNet    | --        | --     | 429
Image Classification | SUN397      | Accuracy  | 74.63  | 425
Image Classification | DTD         | Accuracy  | 73.47  | 419
Image Classification | UCF101      | Top-1 Acc | 86.3   | 404
Action Recognition   | UCF101      | Accuracy  | 86.3   | 365
Image Classification | Food101     | Accuracy  | 86.12  | 309

Showing 10 of 118 rows.
