
AD-CLIP: Adapting Domains in Prompt Space Using CLIP

About

Although deep learning models have shown impressive performance on supervised learning tasks, they often struggle to generalize well when the training (source) and test (target) domains differ. Unsupervised domain adaptation (DA) has emerged as a popular solution to this problem. However, current DA techniques rely on visual backbones, which may lack semantic richness. Despite the potential of large-scale vision-language foundation models like CLIP, their effectiveness for DA has yet to be fully explored. To address this gap, we introduce AD-CLIP, a domain-agnostic prompt learning strategy for CLIP that aims to solve the DA problem in the prompt space. We leverage the frozen vision backbone of CLIP to extract both image style (domain) and content information, which we apply to learn prompt tokens. Our prompts are designed to be domain-invariant and class-generalizable, by conditioning prompt learning on image style and content features simultaneously. We use standard supervised contrastive learning in the source domain, while proposing an entropy minimization strategy to align domains in the embedding space given the target domain data. We also consider a scenario where only target domain samples are available during testing, without any source domain data, and propose a cross-domain style mapping network to hallucinate domain-agnostic tokens. Our extensive experiments on three benchmark DA datasets demonstrate the effectiveness of AD-CLIP compared to existing methods. Code is available at https://github.com/mainaksingha01/AD-CLIP
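The entropy minimization strategy mentioned above can be illustrated with a minimal sketch: unlabeled target-domain predictions are penalized by the mean Shannon entropy of their class distributions, which pushes the model toward confident, low-entropy assignments. The function names below are ours for illustration; the paper's exact formulation of the objective may differ.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_minimization_loss(logits):
    """Mean Shannon entropy of the predicted class distributions.

    Minimizing this on unlabeled target-domain batches encourages
    confident predictions, one common way to align domains when
    target labels are unavailable.
    """
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# A confident prediction incurs a lower loss than an uncertain one.
confident = np.array([[10.0, 0.0, 0.0]])
uniform = np.array([[1.0, 1.0, 1.0]])
assert entropy_minimization_loss(confident) < entropy_minimization_loss(uniform)
```

In practice this term would be added to the supervised contrastive loss on the source domain and backpropagated only through the learnable prompt tokens, since the CLIP backbones are kept frozen.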

Mainak Singha, Harsh Pal, Ankit Jha, Biplab Banerjee • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Unsupervised Domain Adaptation | Office-Home (test) | Average Accuracy | 90.5 | 332
Unsupervised Domain Adaptation | Office-Home | Average Accuracy | 86.1 | 238
Image Classification | Office-Home | Average Accuracy | 86.1 | 142
Unsupervised Domain Adaptation | VisDA unsupervised domain adaptation 2017 | Mean Accuracy | 87.7 | 87
Image Classification | DomainNet-126 | Accuracy (R→C) | 73.6 | 46
Open Set Domain Adaptation | Office-Home | -- | -- | 45
Image Classification | VisDA (val) | Plane Accuracy | 98.1 | 44
Open-Set Multi-Target Domain Adaptation | DomainNet Mini 60/66 (test) | OS* | 92.98 | 40
Open-Set Multi-Target Domain Adaptation | Office-Home 15/50 (test) | OS* | 94.02 | 40
Closed-set Source-Free Domain Adaptation | VisDA Sy→Re | Accuracy (Sy→Re) | 87.7 | 37
Showing 10 of 23 rows
