Learning to Learn Single Domain Generalization
About
We consider a worst-case scenario in model generalization: a model must perform well on many unseen domains while only a single domain is available for training. We propose a new method, adversarial domain augmentation, to solve this out-of-distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and effective domain augmentation, we cast the model training in a meta-learning scheme and employ a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. A detailed theoretical analysis supports our formulation, and extensive experiments on multiple benchmark datasets demonstrate its superior performance on single domain generalization.
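The core mechanism, adversarial domain augmentation, can be illustrated with a minimal sketch: perturb the source-domain inputs by gradient ascent on the task loss to create "fictitious" yet "challenging" samples, while a proximity penalty keeps them close to the source distribution (standing in for the WAE-based relaxation of the worst-case constraint in the paper). The linear model, function names, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(x, y, W):
    # mean cross-entropy loss of a linear classifier with weights W
    p = softmax(x @ W)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def cross_entropy_grad_x(x, y, W):
    # gradient of the per-sample cross-entropy w.r.t. the input x
    p = softmax(x @ W)
    onehot = np.eye(W.shape[1])[y]
    return (p - onehot) @ W.T

def adversarial_augment(x, y, W, step=0.5, n_steps=5, gamma=0.1):
    """Create 'fictitious' hard samples by gradient ascent on the task loss.

    The -gamma * (x_adv - x) term is a simple L2 proximity penalty that keeps
    the augmented samples semantically close to the source domain; the paper
    instead relaxes this constraint with a Wasserstein Auto-Encoder.
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        g = cross_entropy_grad_x(x_adv, y, W) - gamma * (x_adv - x)
        x_adv += step * g  # ascend the relaxed worst-case objective
    return x_adv
```

In the full method, the augmented samples would then be fed back into training under a meta-learning scheme; here they simply yield a higher task loss than the originals, i.e. a "challenging" population.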
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | PACS (test) | Average Accuracy: 82.6 | 254 |
| Domain Generalization | VLCS | Accuracy: 50.46 | 238 |
| Image Classification | PACS | Overall Average Accuracy: 51 | 230 |
| Domain Generalization | PACS | Accuracy (Art): 64.29 | 221 |
| Multi-class classification | VLCS | Acc (Caltech): 97.5 | 139 |
| Image Classification | CIFAR-10-C | Accuracy: 64.65 | 127 |
| Image Classification | PACS | Accuracy: 59.4 | 100 |
| Image Classification | VLCS | Accuracy: 69.58 | 76 |
| Domain Generalization | Office-Home | Average Accuracy: 53.29 | 63 |
| Image Classification | CIFAR-10-C (test) | -- | 61 |