Learning to Generate Novel Domains for Domain Generalization
About
This paper focuses on domain generalization (DG), the task of learning, from multiple source domains, a model that generalizes well to unseen domains. A main challenge for DG is that the available source domains often exhibit limited diversity, hampering the model's ability to learn to generalize. We therefore employ a data generator to synthesize data from pseudo-novel domains to augment the source domains. This explicitly increases the diversity of the available training domains and leads to a more generalizable model. To train the generator, we model the distribution divergence between the source and synthesized pseudo-novel domains using optimal transport, and maximize this divergence. To ensure that semantics are preserved in the synthesized data, we further impose cycle-consistency and classification losses on the generator. Our method, L2A-OT (Learning to Augment by Optimal Transport), outperforms current state-of-the-art DG methods on four benchmark datasets.
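The core of the training signal described above is an optimal-transport divergence between source-domain and generated-domain feature distributions, which the generator tries to *maximize* (subject to cycle-consistency and classification constraints). The abstract does not give implementation details, so the following is only a minimal NumPy sketch of that idea: an entropy-regularized Sinkhorn OT cost between two toy feature batches, with the combined generator objective indicated in a comment. The function name, the regularization strength `eps`, and the loss weights are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def sinkhorn_distance(X, Y, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between two point clouds with uniform weights.

    eps and n_iters are illustrative choices, not values from the paper.
    """
    # Pairwise squared-Euclidean cost matrix between the two batches.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)                      # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))         # uniform source marginal
    b = np.full(len(Y), 1.0 / len(Y))         # uniform target marginal
    u = np.ones(len(X))
    for _ in range(n_iters):                  # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # approximate transport plan
    return float(np.sum(P * C))               # transport cost under that plan

# Toy stand-ins for features from a source domain and a generated
# pseudo-novel domain (the mean shift makes the domains differ).
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8))
gen = rng.normal(2.0, 1.0, size=(64, 8))

ot = sinkhorn_distance(src, gen)
# Conceptually, the generator loss would combine this (negated, since the
# divergence is maximized) with semantic-preservation terms, e.g.:
#   L_G = -lambda_ot * ot + lambda_cyc * L_cyc + lambda_cls * L_cls
# where the lambda weights, L_cyc, and L_cls are placeholders here.
```

A larger OT cost indicates the generated domain is further from the source distribution, which is exactly what the generator is rewarded for, while the cycle-consistency and classification losses stop it from destroying label semantics in the process.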
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image Classification | PACS (test) | Average Accuracy | 84.9 | 254 |
| Image Classification | PACS | Overall Average Accuracy | 82.8 | 230 |
| Domain Generalization | PACS (test) | Average Accuracy | 84.9 | 225 |
| Domain Generalization | PACS | Accuracy (Art) | 83.3 | 221 |
| Image Classification | Office-Home (test) | Mean Accuracy | 65.6 | 199 |
| Person Re-Identification | Market-1501 to DukeMTMC-reID (test) | Rank-1 | 50.1 | 172 |
| Domain Generalization | PACS (leave-one-domain-out) | Art Accuracy | 83.3 | 146 |
| Image Classification | OfficeHome | Average Accuracy | 65.6 | 131 |
| Object Recognition | PACS (leave-one-domain-out) | Acc (Art painting) | 83.3 | 112 |
| Domain Generalization | Office-Home (test) | Average Accuracy | 67.66 | 106 |