f-Domain-Adversarial Learning: Theory and Algorithms

About

Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a related labeled dataset. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences. It recovers the theoretical results of Ben-David et al. (2010a) as a special case and supports divergences used in practice. Based on this bound, we derive a new algorithmic framework that introduces a key correction to the original adversarial training method of Ganin et al. (2016). We show that many regularizers and ad-hoc objectives introduced in this framework in recent years are then not required to achieve performance comparable to (if not better than) state-of-the-art domain-adversarial methods. Experimental analysis conducted on real-world natural language and computer vision datasets shows that our framework outperforms existing baselines and obtains the best results with f-divergences that were not previously considered in domain-adversarial learning.
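The discrepancy measure at the heart of the framework rests on the variational characterization of f-divergences, D_f(P||Q) = sup_T E_P[T(x)] - E_Q[f*(T(x))], where f* is the convex conjugate of f and T plays the role of the adversarial critic. The minimal sketch below (an illustration, not the authors' implementation) checks this identity for the KL divergence (f(u) = u log u, f*(t) = exp(t - 1)) on a toy discrete support, where every expectation can be computed exactly; the distributions p and q are hypothetical values chosen for the example.

```python
import math

# Toy "source" and "target" distributions on a 3-point support
# (hypothetical values for illustration only).
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]

# Closed-form KL divergence for reference: sum_x p(x) log(p(x)/q(x)).
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def variational_bound(T):
    """E_P[T] - E_Q[f*(T)], with f*(t) = exp(t - 1) for the KL case."""
    return (sum(pi * ti for pi, ti in zip(p, T))
            - sum(qi * math.exp(ti - 1) for qi, ti in zip(q, T)))

# The supremum is attained at the optimal critic
# T*(x) = f'(p(x)/q(x)) = log(p(x)/q(x)) + 1, where the bound is tight.
T_opt = [math.log(pi / qi) + 1 for pi, qi in zip(p, q)]
tight = variational_bound(T_opt)

# Any other critic yields a strictly looser lower bound on the divergence.
loose = variational_bound([0.0, 0.0, 0.0])
```

In the adversarial training setting, T is a learned network maximizing this bound to estimate the divergence between source and target feature distributions, while the feature extractor minimizes it; the closed-form optimum above is only available because the toy distributions are known.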

David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler • 2021

Related benchmarks

Task                            | Dataset                       | Metric            | Result | Rank
Unsupervised Domain Adaptation  | Office-Home (test)            | Average Accuracy  | 70     | 332
Unsupervised Domain Adaptation  | Office-Home                   | Average Accuracy  | 70     | 238
Domain Adaptation               | Office-31                     | Accuracy (A -> W) | 95.4   | 156
Domain Adaptation               | Office-Home                   | Average Accuracy  | 68.5   | 111
Unsupervised Domain Adaptation  | Office-31                     | A -> W Accuracy   | 93.4   | 83
Unsupervised Domain Adaptation  | Digits MNIST and USPS (test)  | Accuracy (M -> U) | 95.3   | 5
