
On $f$-Divergence Principled Domain Adaptation: An Improved Framework

About

Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning. In this work, we improve the theoretical foundations of UDA proposed in Acuna et al. (2021) by refining their $f$-divergence-based discrepancy and additionally introducing a new measure, $f$-domain discrepancy ($f$-DD). By removing the absolute value function and incorporating a scaling parameter, $f$-DD yields novel target-error and sample-complexity bounds, allowing us to recover previous KL-based results and to bridge the gap between the algorithms and theory presented in Acuna et al. (2021). Using a localization technique, we also develop a fast-rate generalization bound. Empirical results demonstrate the superior performance of $f$-DD-based learning algorithms over previous works on popular UDA benchmarks.
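To illustrate the role of the scaling parameter described above, here is a minimal sketch of a variational (Donsker-Varadhan style) lower bound on the KL divergence between two sample sets, with an explicit scaling parameter `t`. This is only an illustrative toy, not the paper's exact $f$-DD estimator; the critic and distributions are assumptions chosen so the true KL is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def dv_kl_lower_bound(critic, xs_p, xs_q, t=1.0):
    """Donsker-Varadhan style lower bound on KL(P||Q):
        t * E_P[g(X)] - log E_Q[exp(t * g(X))]
    The scaling parameter t is illustrative of the role such a
    parameter plays in f-DD; this is NOT the paper's estimator."""
    return t * critic(xs_p).mean() - np.log(np.exp(t * critic(xs_q)).mean())

# Toy setup: P = N(1, 1), Q = N(0, 1), so the true KL(P||Q) = 0.5.
# For this pair, the identity critic g(x) = x is optimal up to constants.
xs_p = rng.normal(1.0, 1.0, 100_000)
xs_q = rng.normal(0.0, 1.0, 100_000)
critic = lambda x: x

# Sweep the scaling parameter; analytically the bound is t - t^2/2,
# maximized at t = 1, where it attains the true KL.
ts = np.linspace(0.1, 2.0, 20)
bounds = [dv_kl_lower_bound(critic, xs_p, xs_q, t) for t in ts]
best = max(bounds)
print(best)  # close to the true KL of 0.5
```

Optimizing over the scaling parameter (rather than fixing it to 1) tightens the bound whenever the critic is imperfect, which is one intuition for why such a parameter appears in the $f$-DD framework.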

Ziqiao Wang, Yongyi Mao • 2024

Related benchmarks

Task | Dataset | Result | Rank
Unsupervised Domain Adaptation | Office-Home | Average Accuracy 70.2 | 238
Domain Adaptation | Office-31 | Accuracy (A -> W) 95.3 | 156
Domain Adaptation | Office-Home | Average Accuracy 70.2 | 111
Unsupervised Domain Adaptation | Office-31 | A -> W Accuracy 98.7 | 83
Unsupervised Domain Adaptation | Digits MNIST and USPS (test) | Accuracy (M -> U) 95.9 | 5
