Invariant Risk Minimization
About
We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions. To achieve this goal, IRM learns a data representation such that the optimal classifier, on top of that data representation, matches for all training distributions. Through theory and experiments, we show how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz • 2019
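The paper's practical instantiation (IRMv1) turns the bi-level objective into a penalty: each environment's risk is evaluated with a fixed "dummy" scalar classifier w = 1.0 scaling the representation's output, and the squared gradient of that risk with respect to w is added to the loss. A minimal sketch under squared-error loss follows; the function names and toy data are illustrative, not from the paper:

```python
# IRMv1-style penalty for a scalar predictor under squared-error loss.
# For the per-environment risk R_e(w) = mean((w * f(x) - y)^2), the
# gradient at the dummy classifier w = 1.0 is
#     dR_e/dw = 2 * mean(f(x) * (f(x) - y)),
# and the IRMv1 penalty is that gradient squared.

def irmv1_penalty(preds, targets):
    """Squared gradient of the environment risk w.r.t. w, at w = 1."""
    n = len(preds)
    grad = 2.0 * sum(p * (p - y) for p, y in zip(preds, targets)) / n
    return grad * grad

def irm_objective(envs, lam):
    """Sum over environments of (risk + lam * IRMv1 penalty)."""
    total = 0.0
    for preds, targets in envs:
        risk = sum((p - y) ** 2 for p, y in zip(preds, targets)) / len(preds)
        total += risk + lam * irmv1_penalty(preds, targets)
    return total

# Toy example: two environments with different prediction errors.
env1 = ([1.0, 2.0], [1.0, 1.0])   # imperfect fit -> nonzero penalty
env2 = ([0.5, 1.0], [0.5, 1.0])   # perfect fit -> zero risk, zero penalty
print(irmv1_penalty(*env1))       # 4.0
print(irm_objective([env1, env2], lam=1.0))  # 4.5
```

A predictor whose per-environment optimal scaling is already w = 1 everywhere incurs zero penalty, which is the sense in which the penalty enforces a classifier that "matches" across training distributions.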
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | PACS (test) | Average Accuracy | 81.5 | 254 |
| Domain Generalization | VLCS | Accuracy | 78.6 | 238 |
| Image Classification | PACS | Overall Average Accuracy | 65.2 | 230 |
| Domain Generalization | PACS (test) | Average Accuracy | 77.1 | 225 |
| Domain Generalization | PACS | Accuracy (Art) | 85.7 | 221 |
| Graph Classification | Mutag (test) | Accuracy | 91 | 217 |
| Domain Generalization | OfficeHome | Accuracy | 64.3 | 182 |
| Image Classification | DomainNet | Accuracy (ClipArt) | 48.5 | 161 |
| Domain Generalization | PACS (leave-one-domain-out) | Art Accuracy | 84.8 | 146 |
| Multi-class classification | VLCS | Acc (Caltech) | 98.6 | 139 |
Showing 10 of 349 rows.