
Environment Inference for Invariant Learning

About

Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness. A promising formulation is domain-invariant learning, which identifies the key issue of learning which features are domain-specific versus domain-invariant. An important assumption in this area is that the training examples are partitioned into "domains" or "environments". Our focus is on the more common setting where such partitions are not provided. We propose EIIL, a general framework for domain-invariant learning that incorporates Environment Inference to directly infer partitions that are maximally informative for downstream Invariant Learning. We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels, and significantly outperforms ERM on worst-group performance in the Waterbirds and CivilComments datasets. Finally, we establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
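The Environment Inference (EI) step described above can be sketched for a binary classifier: freeze a reference model, then learn a soft assignment of each training example to one of two environments so that the IRMv1 penalty (the squared gradient of each environment's risk with respect to a scalar classifier multiplier, at multiplier 1) is maximized. The function name and the gradient-aligned initialization below are illustrative choices for this sketch, not the paper's exact implementation, which optimizes randomly initialized assignment logits with Adam.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eiil_environment_inference(logits, labels, steps=1000, lr=1.0):
    """Environment Inference (EI) step of EIIL -- a minimal sketch.

    Given per-example logits from a *fixed* reference model (e.g. trained by
    ERM) and binary labels, learn a soft assignment q_i of each example to
    environment 1 (vs. environment 2) by maximizing the IRMv1 penalty: the
    squared gradient of each environment's risk with respect to a scalar
    classifier multiplier w, evaluated at w = 1.
    """
    # Per-example gradient of the logistic loss wrt the multiplier w, at w=1.
    # With loss(w) = softplus((1 - 2y) * w * z), d/dw at w=1 is z*(sigmoid(z) - y).
    g = logits * (sigmoid(logits) - labels)

    # Symmetry-breaking initialization: align assignment logits with the
    # centered gradients (q_i = 1/2 for all i is a saddle point). This is an
    # assumption of the sketch; the authors optimize random logits with Adam.
    u = g - g.mean()

    for _ in range(steps):
        q = sigmoid(u)                               # soft membership in env 1
        m1 = (q * g).sum() / q.sum()                 # env-1 risk gradient at w=1
        m2 = ((1 - q) * g).sum() / (1 - q).sum()     # env-2 risk gradient at w=1
        # Gradient ascent on J = m1^2 + m2^2: split the data so that
        # invariance is maximally violated across the inferred environments.
        dJ_dq = (2 * m1 * (g - m1) / q.sum()
                 - 2 * m2 * (g - m2) / (1 - q).sum())
        u += lr * dJ_dq * q * (1 - q)                # chain rule through sigmoid
    return sigmoid(u)
```

On data where the reference model is confidently wrong on one subgroup and right on the rest, the inferred assignments tend to separate those subgroups, which is the partition a downstream invariant learner (e.g. IRM or group DRO) then consumes.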

Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sentiment Analysis | SST-2 (test) | Accuracy | 66.39 | 136 |
| Image Classification | Waterbirds (test) | -- | -- | 92 |
| Graph Classification | Twitter | Accuracy | 62.76 | 57 |
| Attribute Classification | CelebA (test) | Worst-group Accuracy | 83.3 | 48 |
| Image Classification | PC-MNIST (val) | Accuracy | 90.18 | 38 |
| Image Classification | PC-MNIST (test) | Accuracy | 63.85 | 38 |
| Graph Classification | DrugOOD Ki-Sca (Scaffold-based OOD shift) | ROC-AUC | 69.63 | 36 |
| Graph Classification | DrugOOD EC50 (Scaffold-based OOD shift) | ROC-AUC | 62.88 | 36 |
| Classification | MNLI (val) | Accuracy | 83.39 | 32 |
| Classification | HANS (test) | Accuracy | 63.9 | 32 |

Showing 10 of 107 rows.
