Learning to Generalize: Meta-Learning for Domain Generalization
About
Domain shift refers to the well-known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which, by design, generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift, as in most previous DG work, we propose a model-agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps which improve training-domain performance should also improve testing-domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
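The core idea above can be sketched in a few lines. The following is a minimal, illustrative first-order approximation of the meta-learning update on a toy linear-regression problem: the source domains are split into virtual meta-train and meta-test sets, an inner gradient step is taken on meta-train, and the meta-test gradient is evaluated at the updated parameters so that the combined step also improves meta-test performance. All names, the toy data, and the hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: three "domains" sharing the same underlying linear relation,
# standing in for the multiple source domains used in DG training.
true_w = np.array([2.0, -1.0])
domains = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    domains.append((X, y))

def loss(w, X, y):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w, X, y):
    r = X @ w - y
    return X.T @ r / len(y)

def meta_update(w, domains, meta_test_idx, alpha=0.1, beta=1.0, lr=0.1):
    """One first-order meta-learning update (illustrative sketch):
    hold out one domain as a virtual meta-test set, take an inner step
    on the meta-train domains, then require that step to also reduce
    the meta-test loss by adding the meta-test gradient at the
    inner-updated parameters."""
    train = [d for i, d in enumerate(domains) if i != meta_test_idx]
    X_test, y_test = domains[meta_test_idx]
    g_train = np.mean([grad(w, X, y) for X, y in train], axis=0)
    w_inner = w - alpha * g_train            # virtual step on meta-train
    g_test = grad(w_inner, X_test, y_test)   # meta-test gradient after the step
    return w - lr * (g_train + beta * g_test)

w = np.zeros(2)
for step in range(200):
    # Rotate which domain plays the virtual meta-test role each step.
    w = meta_update(w, domains, meta_test_idx=step % 3)
```

After training, `w` recovers the shared relation despite each update only ever seeing two domains as "train" and one as held-out "test", which is the generalization pressure the meta-objective is designed to create.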
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | Cityscapes (test) | mIoU | 54.76 | 1145 |
| Image Classification | PACS (test) | Average Accuracy | 82.9 | 254 |
| Domain Generalization | VLCS | Accuracy | 82.9 | 238 |
| Image Classification | PACS | Overall Average Accuracy | 70 | 230 |
| Domain Generalization | PACS (test) | Average Accuracy | 84.8 | 225 |
| Domain Generalization | PACS | Accuracy (Art) | 87.1 | 221 |
| Domain Generalization | OfficeHome | Accuracy | 68.2 | 182 |
| Person Re-Identification | VIPeR | Rank-1 | 23.5 | 182 |
| Image Classification | DomainNet | Accuracy (ClipArt) | 59.1 | 161 |
| Domain Generalization | PACS (leave-one-domain-out) | Art Accuracy | 85.5 | 146 |