
Learning to Generalize: Meta-Learning for Domain Generalization

About

Domain shift refers to the well-known problem that a model trained on one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models that, by design, generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift, as in most previous DG work, we propose a model-agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps taken to improve performance on the training domains should also improve performance on the testing domains. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method on a recent cross-domain image classification benchmark, where it achieves state-of-the-art results, and also demonstrate its potential on two classic reinforcement learning tasks.
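The meta-optimization described above can be sketched in a few lines. This is a first-order simplification (the paper back-propagates through the inner virtual step; here we use the common first-order approximation), with toy quadratic losses standing in for the per-domain empirical risks F (meta-train) and G (virtual meta-test) — the hyperparameter names `alpha`, `beta`, `gamma` follow the paper's inner-step size, meta-test weight, and outer learning rate:

```python
import numpy as np

def mldg_step(theta, grad_F, grad_G, alpha=0.1, beta=1.0, gamma=0.1):
    """One first-order MLDG meta-update:
    take a virtual step on the meta-train domains, then require that
    step to also help the held-out virtual testing domain."""
    g_train = grad_F(theta)                  # gradient on meta-train domains
    theta_virtual = theta - alpha * g_train  # simulated train/test shift step
    g_test = grad_G(theta_virtual)           # gradient on virtual test domain
    return theta - gamma * (g_train + beta * g_test)

# Toy example: two "domains" with quadratic losses centred at different optima.
grad_F = lambda th: 2.0 * (th - 1.0)   # F(th) = (th - 1)^2, meta-train
grad_G = lambda th: 2.0 * (th + 1.0)   # G(th) = (th + 1)^2, meta-test

theta = 0.5
for _ in range(100):
    theta = mldg_step(theta, grad_F, grad_G)
# theta settles at a compromise solution that does well on both domains
```

With these toy losses the iteration contracts to a fixed point near -1/9, i.e. a parameter that trades off both domains rather than overfitting the meta-train one.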

Da Li, Yongxin Yang, Yi-Zhe Song, Timothy M. Hospedales • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | Cityscapes (test) | mIoU | 54.76 | 1145 |
| Image Classification | PACS (test) | Average Accuracy | 82.9 | 254 |
| Domain Generalization | VLCS | Accuracy | 82.9 | 238 |
| Image Classification | PACS | Overall Average Accuracy | 70 | 230 |
| Domain Generalization | PACS (test) | Average Accuracy | 84.8 | 225 |
| Domain Generalization | PACS | Accuracy (Art) | 87.1 | 221 |
| Domain Generalization | OfficeHome | Accuracy | 68.2 | 182 |
| Person Re-Identification | VIPeR | Rank-1 | 23.5 | 182 |
| Image Classification | DomainNet | Accuracy (ClipArt) | 59.1 | 161 |
| Domain Generalization | PACS (leave-one-domain-out) | Art Accuracy | 85.5 | 146 |

Showing 10 of 120 rows
