
In Search of Lost Domain Generalization

About

The goal of domain generalization algorithms is to predict well on distributions different from those seen during training. While a myriad of domain generalization algorithms exist, inconsistencies in experimental conditions -- datasets, architectures, and model selection criteria -- render fair and realistic comparisons difficult. In this paper, we are interested in understanding how useful domain generalization algorithms are in realistic settings. As a first step, we realize that model selection is non-trivial for domain generalization tasks. Contrary to prior work, we argue that domain generalization algorithms without a model selection strategy should be regarded as incomplete. Next, we implement DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria. We conduct extensive experiments using DomainBed and find that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets. Looking forward, we hope that the release of DomainBed, along with contributions from fellow researchers, will streamline reproducible and rigorous research in domain generalization.
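The ERM baseline referenced above simply pools the examples from every training domain and minimizes the average loss, ignoring domain labels entirely. A minimal sketch of that idea, using a NumPy logistic-regression model on toy data (function and variable names here are illustrative, not taken from the DomainBed codebase):

```python
import numpy as np

def erm_train(domains, lr=0.1, steps=500):
    """ERM baseline: pool all training domains and minimize the
    average logistic loss with plain gradient descent."""
    X = np.vstack([x for x, _ in domains])
    y = np.concatenate([t for _, t in domains])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)         # gradient of mean log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two toy training "domains": same labeling rule (x[0] > 0),
# but the second feature is shifted differently per domain.
rng = np.random.default_rng(0)
x1 = rng.normal(0, 1, (200, 2))
x2 = rng.normal(0, 1, (200, 2)) + np.array([0.0, 2.0])
y1 = (x1[:, 0] > 0).astype(float)
y2 = (x2[:, 0] > 0).astype(float)

w, b = erm_train([(x1, y1), (x2, y2)])

# Evaluate on an unseen "test domain" with yet another shift.
x3 = rng.normal(0, 1, (200, 2)) + np.array([0.0, -2.0])
y3 = (x3[:, 0] > 0).astype(float)
acc = np.mean(((x3 @ w + b) > 0) == y3)
```

Because the label depends only on the first feature while the domain shift affects only the second, pooled ERM generalizes to the shifted test domain here; the paper's point is that carefully tuned ERM is similarly competitive on the real multi-domain datasets below.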

Ishaan Gulrajani, David Lopez-Paz • 2020

Related benchmarks

Task                    Dataset              Metric             Result   Rank
Image Classification    PACS (test)          Average Accuracy   85.5     254
Domain Generalization   VLCS                 Accuracy           80.96    238
Domain Generalization   PACS (test)          Average Accuracy   91.9     225
Domain Generalization   PACS                 Accuracy (Art)     88.1     221
Image Classification    Office-Home (test)   --                 --       199
Domain Generalization   OfficeHome           Accuracy           81.23    182
Domain Generalization   DomainBed            Average Accuracy   76.16    127
Domain Generalization   DomainNet            Accuracy           44       113
Domain Generalization   DomainBed (test)     VLCS Accuracy      82.7     110
Domain Generalization   Office-Home (test)   Average Accuracy   78.4     106

Showing 10 of 79 rows

Other info

Code
