
Feature Fusion Transferability Aware Transformer for Unsupervised Domain Adaptation

About

Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from labeled source domains to improve performance on the unlabeled target domains. While Convolutional Neural Networks (CNNs) have been dominant in previous UDA methods, recent research has shown promise in applying Vision Transformers (ViTs) to this task. In this study, we propose a novel Feature Fusion Transferability Aware Transformer (FFTAT) to enhance ViT performance in UDA tasks. Our method introduces two key innovations: First, we introduce a patch discriminator to evaluate the transferability of patches, generating a transferability matrix. We integrate this matrix into self-attention, directing the model to focus on transferable patches. Second, we propose a feature fusion technique to fuse embeddings in the latent space, enabling each embedding to incorporate information from all others, thereby improving generalization. These two components work in synergy to enhance feature representation learning. Extensive experiments on widely used benchmarks demonstrate that our method significantly improves UDA performance, achieving state-of-the-art (SOTA) results.
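The two components can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption rather than the paper's exact formulation: the function names, the rank-one transferability matrix built as `T = t tᵀ` from per-patch scores, the logit re-weighting scheme, and the mean-based fusion are all placeholders for the mechanisms the abstract describes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transferability_aware_attention(Q, K, V, t):
    # t: per-patch transferability scores in [0, 1], e.g. produced by a
    # patch discriminator. Forming T = t t^T is one simple way to turn
    # per-patch scores into a patch-pair matrix (an assumption here).
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    T = np.outer(t, t)
    # Re-weight the attention logits by transferability so attention mass
    # shifts toward transferable patches (illustrative weighting scheme).
    weights = softmax(logits * T, axis=-1)
    return weights @ V, weights

def feature_fusion(Z, alpha=0.5):
    # Blend each embedding with the mean of all embeddings, so every
    # embedding incorporates information from all others (a simple
    # stand-in for the paper's latent-space fusion).
    return (1 - alpha) * Z + alpha * Z.mean(axis=0, keepdims=True)
```

For example, with `Q`, `K`, `V` of shape `(num_patches, dim)` and a score vector `t` of length `num_patches`, the attention rows still sum to one but patches with low transferability contribute less to the mixed representation.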

Xiaowei Yu, Zhe Huang, Zao Zhang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Unsupervised Domain Adaptation | Office-Home (test) | Average Accuracy | 91.4 | 332 |
| Unsupervised Domain Adaptation | DomainNet (test) | Average Accuracy | 51.9 | 97 |
| Unsupervised Domain Adaptation | VisDA 2017 (test) | Plane Accuracy | 99.7 | 27 |
| Unsupervised Domain Adaptation | Office-31 standard (test) | Accuracy (A->W) | 97.6 | 14 |
