
When is invariance useful in an Out-of-Distribution Generalization problem?

About

The goal of the Out-of-Distribution (OOD) generalization problem is to train a predictor that generalizes to all environments. Popular approaches in this field rest on the hypothesis that such a predictor should be an invariant predictor, one that captures the mechanism remaining constant across environments. While these approaches have been experimentally successful in various case studies, there is still much room for the theoretical validation of this hypothesis. This paper presents a new set of theoretical conditions necessary for an invariant predictor to achieve OOD optimality. Our theory not only applies to non-linear cases, but also generalizes the necessary condition used in Rojas-Carulla et al. (2018). We also derive the Inter Gradient Alignment algorithm from our theory and demonstrate its competitiveness on MNIST-derived benchmark datasets as well as on two of the three Invariance Unit Tests proposed by Aubin et al. (2021).
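The gradient-alignment idea mentioned in the abstract can be illustrated on a toy problem. The sketch below is hypothetical and not the paper's exact objective: it penalizes the mean squared deviation of each environment's risk gradient from the average gradient, one common form of inter-environment gradient alignment, on a linear regression task with an environment-dependent spurious feature. All data, coefficients, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical environments: y depends only on x1, while the
# spurious feature x2 tracks x1 with an environment-specific sign.
def make_env(n, spurious_coef):
    x1 = rng.normal(size=(n, 1))
    x2 = spurious_coef * x1 + rng.normal(scale=0.1, size=(n, 1))
    y = 1.5 * x1[:, 0] + rng.normal(scale=0.1, size=n)
    return np.hstack([x1, x2]), y

envs = [make_env(200, 1.0), make_env(200, -1.0)]

def env_grad(w, X, y):
    # Gradient of the MSE risk for the linear predictor Xw.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def iga_step(w, envs, lam, lr):
    grads = [env_grad(w, X, y) for X, y in envs]
    gbar = np.mean(grads, axis=0)
    # For a linear model, each g_e is affine in w with constant
    # Jacobian A_e, so the alignment penalty mean_e ||g_e - gbar||^2
    # has a closed-form gradient.
    As = [2.0 * X.T @ X / len(y) for X, y in envs]
    Abar = np.mean(As, axis=0)
    pen_grad = np.mean([2.0 * (A - Abar).T @ (g - gbar)
                        for A, g in zip(As, grads)], axis=0)
    return w - lr * (gbar + lam * pen_grad)

w = np.zeros(2)
for _ in range(500):
    w = iga_step(w, envs, lam=1.0, lr=0.05)
```

Because the spurious correlation flips sign between the environments, the per-environment gradients disagree exactly along the spurious direction; the alignment penalty therefore drives the weight on x2 toward zero while the invariant weight on x1 stays near its true value of 1.5.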

Masanori Koyama, Shoichiro Yamaguchi • 2020

Related benchmarks

Task | Dataset | Result | Rank
--- | --- | --- | ---
Node Classification | Cora Covariate shift (degree split) | OOD Accuracy 62.62 | 50
Regression | PovertyMap (test) | Worst-U/R Pearson Correlation 0.43 | 43
Regression | ACSIncome (test) | RMSE 0.454 | 34
Node Classification | Arxiv Covariate shift (degree split) | OOD Accuracy 65.87 | 30
Node Classification | WebKB university split Concept shift | OOD Test Accuracy 28.44 | 30
Node Classification | WebKB university split Covariate shift | OOD Test Accuracy 29.89 | 30
Node Classification | Arxiv Covariate shift (time split) | OOD Test Accuracy 65.93 | 20
Node Classification | Cora word split Covariate shift | OOD Test Accuracy 65.07 | 15
Node Classification | Cora word split Concept shift | OOD Accuracy 64.56 | 15
Node Classification | CBAS color split Covariate shift | OOD Accuracy 65.71 | 15

Showing 10 of 14 rows
