
Split to Merge: Unifying Separated Modalities for Unsupervised Domain Adaptation

About

Large vision-language models (VLMs) like CLIP have demonstrated good zero-shot learning performance in the unsupervised domain adaptation task. Yet, most transfer approaches for VLMs focus on either the language or visual branches, overlooking the nuanced interplay between both modalities. In this work, we introduce a Unified Modality Separation (UniMoS) framework for unsupervised domain adaptation. Leveraging insights from modality gap studies, we craft a nimble modality separation network that distinctly disentangles CLIP's features into language-associated and vision-associated components. Our proposed Modality-Ensemble Training (MET) method fosters the exchange of modality-agnostic information while maintaining modality-specific nuances. We align features across domains using a modality discriminator. Comprehensive evaluations on three benchmarks reveal our approach sets a new state-of-the-art with minimal computational costs. Code: https://github.com/TL-UESTC/UniMoS
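As a rough illustration of the pipeline the abstract describes, the sketch below disentangles a CLIP image feature into a language-associated and a vision-associated component, scores each component against text prototypes, and averages the two predictions in the spirit of Modality-Ensemble Training. This is a minimal PyTorch sketch, not the authors' implementation: the names (ModalitySeparationNet, ModalityDiscriminator, ensemble_logits), the 512-dimensional features, the linear projection heads, and the plain logit averaging are all assumptions for illustration; the actual method lives in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalitySeparationNet(nn.Module):
    """Lightweight net that splits a CLIP visual feature into a
    language-associated and a vision-associated component (illustrative)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.lang_head = nn.Linear(dim, dim)  # language-associated part
        self.vis_head = nn.Linear(dim, dim)   # vision-associated part

    def forward(self, feat: torch.Tensor):
        return self.lang_head(feat), self.vis_head(feat)

class ModalityDiscriminator(nn.Module):
    """Classifies which component a feature came from (0 = language-
    associated, 1 = vision-associated); in the paper's setup a modality
    discriminator is used to align features across domains."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, 2)
        )

    def forward(self, comp: torch.Tensor):
        return self.net(comp)

def ensemble_logits(lang_comp, vis_comp, text_protos):
    """Stand-in for Modality-Ensemble Training at inference time: classify
    each component against CLIP text prototypes via cosine similarity and
    average the two logit sets."""
    protos = F.normalize(text_protos, dim=-1)
    lang_logits = F.normalize(lang_comp, dim=-1) @ protos.T
    vis_logits = F.normalize(vis_comp, dim=-1) @ protos.T
    return 0.5 * (lang_logits + vis_logits)

# Toy usage with random stand-ins for CLIP image features / text prototypes.
feats = torch.randn(4, 512)    # batch of CLIP image features
protos = torch.randn(65, 512)  # e.g. text embeddings for 65 Office-Home classes
sep = ModalitySeparationNet()
lang_c, vis_c = sep(feats)
print(ensemble_logits(lang_c, vis_c, protos).shape)  # torch.Size([4, 65])
```

In the full method the discriminator would be trained adversarially against features from the source and target domains; it is defined here only to show where modality labels would enter the pipeline.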

Xinyao Li, Yuke Li, Zhekai Du, Fengling Li, Ke Lu, Jingjing Li · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Unsupervised Domain Adaptation | Office-Home (test) | Average Accuracy | 90.7 | 332 |
| Unsupervised Domain Adaptation | Office-Home | Average Accuracy | 77.9 | 238 |
| Unsupervised Domain Adaptation | DomainNet | Average Accuracy | 85.8 | 100 |
| Unsupervised Domain Adaptation | VisDA unsupervised domain adaptation 2017 | Mean Accuracy | 88.1 | 87 |
| Unsupervised Domain Adaptation | DomainNet mini (test) | Average Accuracy | 87.3 | 23 |

Other info

Code: https://github.com/TL-UESTC/UniMoS
