# f-Domain-Adversarial Learning: Theory and Algorithms

## About
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a labeled dataset from a related source domain. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences. It recovers the theoretical results of Ben-David et al. (2010a) as a special case and supports the divergences used in practice. Based on this bound, we derive a new algorithmic framework that introduces a key correction to the original adversarial training method of Ganin et al. (2016). We show that many regularizers and ad-hoc objectives introduced in this framework over recent years are then not required to achieve performance comparable to, if not better than, state-of-the-art domain-adversarial methods. Experimental analysis conducted on real-world natural-language and computer-vision datasets shows that our framework outperforms existing baselines and obtains the best results for f-divergences not previously considered in domain-adversarial learning.
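The discrepancy measure at the heart of the framework rests on the variational characterization of f-divergences, D_f(P‖Q) ≥ E_{x∼P}[T(x)] − E_{x∼Q}[f*(T(x))], where f* is the convex conjugate of f and T ranges over test functions (in domain-adversarial training, T is the discriminator). Below is a minimal NumPy sketch of this lower bound for the KL divergence, with a hand-picked affine test function; the function names (`fstar_kl`, `variational_bound`) and the choice of Gaussian samples are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Variational lower bound on an f-divergence:
#   D_f(P || Q) >= E_{x~P}[T(x)] - E_{x~Q}[f*(T(x))]
# For the KL divergence, f(t) = t log t and its convex conjugate
# is f*(u) = exp(u - 1).

def fstar_kl(u):
    """Convex conjugate of f(t) = t log t (KL divergence)."""
    return np.exp(u - 1.0)

def variational_bound(T, xs_p, xs_q, fstar=fstar_kl):
    """Monte-Carlo estimate of the variational lower bound on D_f(P||Q)."""
    return np.mean(T(xs_p)) - np.mean(fstar(T(xs_q)))

# Illustrative example: P = N(1, 1), Q = N(0, 1), so the true KL is 0.5.
rng = np.random.default_rng(0)
xs_p = rng.normal(loc=1.0, scale=1.0, size=10_000)
xs_q = rng.normal(loc=0.0, scale=1.0, size=10_000)

# For KL the bound is tight at T*(x) = 1 + log(dP/dQ); here
# log(dP/dQ) = x - 0.5, so T*(x) = 0.5 + x is the optimal test function.
T = lambda x: 0.5 + x
estimate = variational_bound(T, xs_p, xs_q)
print(f"variational KL lower-bound estimate: {estimate:.3f}")  # true KL = 0.5
```

In the full algorithm, T is a neural discriminator trained to maximize this objective over features shared by source and target, while the feature extractor minimizes it adversarially; swapping `fstar_kl` for the conjugate of another f yields the other divergences the paper studies.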
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Unsupervised Domain Adaptation | Office-Home (test) | Average Accuracy | 70 | 332 |
| Unsupervised Domain Adaptation | Office-Home | Average Accuracy | 70 | 238 |
| Domain Adaptation | Office-31 | Accuracy (A -> W) | 95.4 | 156 |
| Domain Adaptation | Office-Home | Average Accuracy | 68.5 | 111 |
| Unsupervised Domain Adaptation | Office-31 | Accuracy (A -> W) | 93.4 | 83 |
| Unsupervised Domain Adaptation | Digits MNIST and USPS (test) | Accuracy (M -> U) | 95.3 | 5 |