
Invariant and Transportable Representations for Anti-Causal Domain Shifts

About

Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data was gathered. Methods to handle such problems must specify what structure is shared between the domains and what varies. A natural assumption is that causal (structural) relationships are invariant in all domains. It is then tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are "anti-causal" in the sense that $Y$ is a cause of the covariates $X$ -- in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and naturally handles the "anti-causal" structure. We show how to leverage the shared causal structure of the domains to learn a representation that admits an invariant predictor and also allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle "invariant" and "non-stable" features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm. Code is available at https://github.com/ybjiaang/ACTIR.
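To make the anti-causal setting concrete, the following sketch (my own illustration, not code from the ACTIR repository) simulates data where the label $Y$ is sampled first and then generates two covariates: one whose conditional distribution $P(X \mid Y)$ is fixed across domains (the "invariant" feature) and one whose correlation with $Y$ varies by domain (the "non-stable" feature). A classifier relying on the invariant feature transfers across domains, while one relying on the non-stable feature fails when the correlation flips:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(n, spurious_corr):
    """Anti-causal generation: the label y is sampled first, then covariates."""
    y = rng.integers(0, 2, size=n)                 # Y ~ Bernoulli(0.5)
    # Invariant feature: P(x_inv | y) is the same in every domain.
    x_inv = y + rng.normal(0.0, 0.5, size=n)
    # Non-stable feature: agrees with y with domain-specific probability.
    agree = rng.random(n) < spurious_corr
    x_spu = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.5, size=n)
    return np.stack([x_inv, x_spu], axis=1), y

# Training domain: spurious feature highly predictive; test domain: correlation reversed.
X_tr, y_tr = sample_domain(5000, spurious_corr=0.9)
X_te, y_te = sample_domain(5000, spurious_corr=0.1)

def accuracy(col, X, y):
    """Accuracy of thresholding a single feature at 0.5."""
    return ((X[:, col] > 0.5).astype(int) == y).mean()

print("invariant feature, train/test:", accuracy(0, X_tr, y_tr), accuracy(0, X_te, y_te))
print("spurious  feature, train/test:", accuracy(1, X_tr, y_tr), accuracy(1, X_te, y_te))
```

The invariant feature scores roughly the same in both domains, whereas the spurious feature drops below chance at test time; this is the failure mode that the proposed disentangling principles are designed to avoid.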

Yibo Jiang, Victor Veitch • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Domain Generalization | PACS (test) | -- | 225 |
| Image Classification | CMNIST (test) | Test Accuracy: 69.7 | 55 |
| Classification | CAMELYON17 (test) | Average Accuracy: 90.2 | 13 |
| Domain Adaptation | CMNIST v1 (test) | Accuracy (Shift 1.0): 70.6 | 9 |
| Domain Generalization | AC Synthetic (test) | Accuracy: 74.9 | 7 |
| Domain Generalization | CE-DD Synthetic (test) | Accuracy: 69.6 | 7 |
| Image Classification | ColorMNIST (test) | Accuracy: 69.7 | 6 |
