
Uncertainty-Aware Pseudo-Label Filtering for Source-Free Unsupervised Domain Adaptation

About

Source-free unsupervised domain adaptation (SFUDA) aims to adapt a pre-trained source model to an unlabeled target domain without access to the source data. Self-training is one approach to SFUDA, in which confident target samples are iteratively selected as pseudo-labeled examples to guide target-model learning. However, prior heuristic methods for filtering noisy pseudo labels all introduce extra models, making them sensitive to model assumptions and prone to additional errors or mislabeling. In this work, we propose Uncertainty-aware Pseudo-label-filtering Adaptation (UPA), which addresses this issue efficiently in a coarse-to-fine manner. Specifically, we first introduce a sample selection module, Adaptive Pseudo-label Selection (APS), that filters out noisy pseudo labels. APS estimates sample uncertainty with a simple method that aggregates knowledge from neighboring samples, and confident samples are then selected as clean pseudo-labeled data. Additionally, we incorporate Class-Aware Contrastive Learning (CACL) to mitigate memorization of pseudo-label noise by learning robust pair-wise representations supervised by pseudo labels. Extensive experiments on three widely used benchmarks demonstrate that the proposed method achieves performance on par with state-of-the-art SFUDA methods. Code is available at https://github.com/chenxi52/UPA.
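The neighbor-aggregation idea behind APS can be sketched in a few lines. Note this is an illustrative sketch, not the paper's exact formulation: the function name `aps_filter`, the choice of cosine-similarity k-NN, the use of entropy as the uncertainty score, and the quantile threshold are all assumptions; see the linked repository for the actual implementation.

```python
import numpy as np

def aps_filter(features, probs, k=4, quantile=0.5):
    """Sketch of neighbor-based uncertainty filtering (assumed APS-like).

    For each sample, softmax predictions of its k nearest neighbors
    (by cosine similarity) are averaged; the entropy of the aggregated
    distribution serves as an uncertainty score, and the least
    uncertain samples are kept as "clean" pseudo-labeled data.
    """
    # L2-normalize features so the dot product is cosine similarity
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)            # exclude each sample itself
    nn = np.argsort(-sim, axis=1)[:, :k]      # indices of k nearest neighbors
    agg = probs[nn].mean(axis=1)              # aggregate neighbor predictions
    entropy = -(agg * np.log(agg + 1e-8)).sum(axis=1)
    clean = entropy <= np.quantile(entropy, quantile)  # low entropy -> clean
    pseudo = agg.argmax(axis=1)               # refined pseudo labels
    return clean, pseudo
```

Samples flagged `clean` would then supervise the target model, while the rest are held out from the pseudo-label loss.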
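The CACL objective is a pseudo-label-supervised contrastive loss: samples sharing a pseudo label are pulled together, others pushed apart. The minimal sketch below assumes a SupCon-style formulation with a temperature `tau`; the exact loss, positive-pair construction, and hyperparameters in the paper may differ.

```python
import numpy as np

def cacl_loss(z, labels, tau=0.1):
    """Sketch of a class-aware (pseudo-label-supervised) contrastive loss.

    For each anchor, positives are the other samples with the same
    pseudo label; the loss is the negative mean log-probability of the
    positives under a temperature-scaled softmax over all other samples.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize embeddings
    n = z.shape[0]
    sim = z @ z.T / tau
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)            # never contrast with self
    # row-wise log-softmax over all other samples (numerically stable)
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    has_pos = pos.any(axis=1)
    pos_log = np.where(pos, log_prob, 0.0).sum(axis=1)
    per_anchor = pos_log / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor[has_pos].mean()                 # anchors without positives skipped
```

Supervising pairs rather than individual logits is what makes the representation robust to a fraction of wrong pseudo labels: a mislabeled sample only perturbs its pair terms, not the class decision boundary directly.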

Xi Chen, Haosen Yang, Huicong Zhang, Hongxun Yao, Xiatian Zhu• 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Domain Adaptation | Office-Home (test) | Mean Accuracy: 73 | 112 |
| Domain Adaptation | OFFICE | Average Accuracy: 89.9 | 96 |
| Image Classification | VisDA-C (test) | Mean Accuracy: 88.7 | 76 |
| Image Classification | DomainNet-126 | Accuracy (R→C): 69.5 | 46 |
| Domain Adaptation | VisDA-C (test) | S→R Score: 0.887 | 26 |
| Domain Adaptation | DomainNet-126 | Accuracy (S→P): 66.8 | 26 |
| Classification | WiSig 1-1 → 1-19 (test) | Accuracy: 81.21 | 11 |
| Classification | WiSig 2-1 → 18-2 (test) | Accuracy: 38.6 | 11 |
| Classification | WiSig 14-7 → 3-19 (test) | Accuracy: 53.12 | 11 |
| Classification | WiSig 7-7 → 8-8 (test) | Accuracy: 73.54 | 11 |
Showing 10 of 24 rows
