
Multi-Modal Adapter for Vision-Language Models

About

Large pre-trained vision-language models, such as CLIP, have demonstrated state-of-the-art performance across a wide range of image classification tasks without requiring retraining. In the few-shot setting, CLIP is competitive with specialized architectures trained directly on the downstream tasks, and recent research shows its performance can be further improved with lightweight adaptation approaches. However, previous methods adapt each modality of the CLIP model individually, ignoring the interactions between visual and textual representations. In this work, we propose Multi-Modal Adapter, an approach for multi-modal adaptation of CLIP. Specifically, we add a trainable Multi-Head Attention layer that combines text and image features to produce an additive adaptation of both. Multi-Modal Adapter demonstrates improved generalization, outperforming existing adaptation methods on unseen classes. We perform additional ablations and investigations to validate and interpret the proposed approach.

Dominykas Seputis, Serghei Mihailov, Soham Chatterjee, Zehao Xiao • 2024
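
The core idea in the abstract, a trainable multi-head attention layer that mixes text and image features and adds the result back to each modality as a residual update, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' reference implementation: the feature dimension, number of heads, and the residual ratio alpha are assumed values, and the paired text/image features stand in for frozen CLIP encoder outputs.

    # Minimal sketch of a multi-modal adapter; names and hyperparameters
    # are illustrative assumptions, not the paper's reference code.
    import torch
    import torch.nn as nn

    class MultiModalAdapter(nn.Module):
        """A trainable Multi-Head Attention layer that mixes text and
        image features and additively adapts both modalities."""

        def __init__(self, dim: int = 512, num_heads: int = 8, alpha: float = 0.2):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.alpha = alpha  # assumed residual mixing ratio

        def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
            # Stack both modalities into one two-token sequence so attention
            # can model text-image interactions: shape (B, 2, dim).
            tokens = torch.stack([text_feats, image_feats], dim=1)
            mixed, _ = self.attn(tokens, tokens, tokens)
            # Additive adaptation: residual update of each modality.
            text_adapted = text_feats + self.alpha * mixed[:, 0]
            image_adapted = image_feats + self.alpha * mixed[:, 1]
            return text_adapted, image_adapted

    # Usage: adapt (stand-in) frozen CLIP features for 4 image-text pairs.
    adapter = MultiModalAdapter(dim=512)
    text = torch.randn(4, 512)   # placeholder for frozen CLIP text features
    image = torch.randn(4, 512)  # placeholder for frozen CLIP image features
    t_adapted, i_adapted = adapter(text, image)
    print(t_adapted.shape, i_adapted.shape)  # torch.Size([4, 512]) twice

Only the adapter is trained; the CLIP backbone stays frozen, which is what makes this a lightweight adaptation approach.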

Related benchmarks

Task                  Dataset      Metric     Result  Rank
Image Classification  ImageNet A   Top-1 Acc  51.12   654
Image Classification  ImageNet V2  --         --      611
Image Classification  EuroSAT      Accuracy   92.37   569
Image Classification  Flowers102   Accuracy   72.07   558
Image Classification  DTD          Accuracy   73.47   485
Image Classification  Food101      Accuracy   86.12   457
Image Classification  UCF101       Top-1 Acc  86.3    455
Image Classification  SUN397       Accuracy   74.63   441
Action Recognition    UCF101       Accuracy   86.3    431
Image Classification  ImageNet     --         --      431
(showing 10 of 133 rows)
