
Learning to Learn Single Domain Generalization

About

We are concerned with a worst-case scenario in model generalization: a model aims to perform well on many unseen domains while only a single domain is available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to justify our formulation, while extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single domain generalization.
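The core idea above — perturbing source samples by gradient ascent on the task loss, while a constraint keeps the fictitious samples from drifting arbitrarily far from the source distribution — can be sketched in a few lines. This is an illustrative toy (a linear model with a simple squared-distance penalty standing in for the paper's WAE-based relaxation), not the authors' implementation; all names and hyperparameters are made up.

```python
import numpy as np

def adversarial_augment(x, y, w, lr=0.5, gamma=1.0, steps=5):
    """Create a 'fictitious' sample by gradient ascent on the squared loss
    of a linear model w, penalized by gamma * ||x_adv - x||^2 so the
    augmented sample stays near the source (a crude stand-in for the
    paper's relaxed worst-case constraint). All names are illustrative."""
    x_adv = x.copy()
    for _ in range(steps):
        pred = x_adv @ w
        # gradient of (pred - y)^2 - gamma * ||x_adv - x||^2 w.r.t. x_adv
        grad = 2 * (pred - y) * w - 2 * gamma * (x_adv - x)
        x_adv = x_adv + lr * grad  # ascent step: make the sample harder
    return x_adv

# Tiny usage example: the augmented sample incurs a higher task loss.
x = np.array([1.0, 2.0])
y = 1.0
w = np.array([0.5, 0.1])
x_adv = adversarial_augment(x, y, w)
loss = lambda z: float(((z @ w) - y) ** 2)
print(loss(x_adv) > loss(x))  # fictitious sample is more challenging
```

In the actual method the model is then trained on both source and fictitious populations under a meta-learning scheme, so that each adversarial augmentation simulates an unseen domain shift.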

Fengchun Qiao, Long Zhao, Xi Peng • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | PACS (test) | Average Accuracy: 82.6 | 254 |
| Domain Generalization | VLCS | Accuracy: 50.46 | 238 |
| Image Classification | PACS | Overall Average Accuracy: 51 | 230 |
| Domain Generalization | PACS | Accuracy (Art): 64.29 | 221 |
| Multi-class classification | VLCS | Acc (Caltech): 97.5 | 139 |
| Image Classification | CIFAR-10-C | Accuracy: 64.65 | 127 |
| Image Classification | PACS | Accuracy: 59.4 | 100 |
| Image Classification | VLCS | Accuracy: 69.58 | 76 |
| Domain Generalization | Office-Home | Average Accuracy: 53.29 | 63 |
| Image Classification | CIFAR-10-C (test) | -- | 61 |

Showing 10 of 26 rows
