
Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features

About

To avoid failures on out-of-distribution data, recent works have sought to extract features that have an invariant or stable relationship with the label across domains, discarding "spurious" or unstable features whose relationship with the label changes across domains. However, unstable features often carry complementary information that could boost performance if used correctly in the test domain. In this work, we show how this can be done without test-domain labels. In particular, we prove that pseudo-labels based on stable features provide sufficient guidance for doing so, provided that stable and unstable features are conditionally independent given the label. Based on this theoretical insight, we propose Stable Feature Boosting (SFB), an algorithm for: (i) learning a predictor that separates stable and conditionally-independent unstable features; and (ii) using the stable-feature predictions to adapt the unstable-feature predictions in the test domain. Theoretically, we prove that SFB can learn an asymptotically-optimal predictor without test-domain labels. Empirically, we demonstrate the effectiveness of SFB on real and synthetic data.
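The two-step procedure described above can be sketched in a toy setting. This is an illustrative sketch of the adaptation idea, not the authors' implementation: a stable-feature classifier pseudo-labels unlabeled test data, the unstable-feature classifier is re-fit from those pseudo-labels, and the two predictions are combined. The data generator and all names below are assumptions for illustration.

```python
# Toy sketch (assumed setup, not the paper's code): stable-feature
# pseudo-labels guide test-time adaptation of an unstable feature whose
# relationship with the label has flipped in the test domain.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)

# Stable feature: same (moderately noisy) relationship with y in every domain.
x_stable = (2 * y - 1) + rng.normal(0.0, 1.0, n)
# Unstable feature: highly informative, but its sign has flipped at test time.
x_unstable = -(2 * y - 1) + rng.normal(0.0, 0.5, n)

# (i) Stable-feature predictions (Bayes-optimal logistic model for this toy
# data, standing in for a classifier learned on the training domains).
p_stable = 1.0 / (1.0 + np.exp(-2.0 * x_stable))
pseudo = (p_stable > 0.5).astype(int)  # pseudo-labels on the test domain

# (ii) Adapt the unstable classifier via pseudo-labels: re-estimate the
# class-conditional means of x_unstable (a naive-Bayes-style update that is
# justified when features are conditionally independent given the label).
mu0 = x_unstable[pseudo == 0].mean()
mu1 = x_unstable[pseudo == 1].mean()
logit_unstable = (mu1 - mu0) * (x_unstable - 0.5 * (mu0 + mu1))

# Combine the complementary evidence by summing the logits.
logit_stable = np.log(p_stable / (1.0 - p_stable))
y_hat = (logit_stable + logit_unstable > 0).astype(int)

acc_stable = float((pseudo == y).mean())
acc_combined = float((y_hat == y).mean())
print(f"stable-only accuracy: {acc_stable:.3f}")
print(f"combined accuracy:    {acc_combined:.3f}")
```

Even though the pseudo-labels are imperfect, they recover the flipped sign of the unstable feature, so combining both signals improves on the stable-only predictor — the effect the abstract describes.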

Cian Eastwood, Shashank Singh, Andrei Liviu Nicolicioiu, Marin Vlastelica, Julius von Kügelgen, Bernhard Schölkopf • 2023

Related benchmarks

Task                   Dataset                 Result                     Rank
Domain Generalization  PACS (test)             --                         225
Image Classification   CMNIST (test)           Test Accuracy: 88.1        55
Classification         CAMELYON17 (test)       Average Accuracy: 90.3     13
Domain Adaptation      CMNIST v1 (test)        Accuracy (Shift 1.0): 100  9
Domain Generalization  AC Synthetic (test)     Accuracy: 89.2             7
Domain Generalization  CE-DD Synthetic (test)  Accuracy: 88.6             7
Image Classification   ColorMNIST (test)       Accuracy: 88.1             6

Other info

Code
