
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations

About

Neural network classifiers can rely largely on simple spurious features, such as backgrounds, to make predictions. However, even in these cases, we show that they still often learn core features associated with the desired attributes of the data, contrary to recent findings. Inspired by this insight, we demonstrate that simple last layer retraining can match or outperform state-of-the-art approaches on spurious correlation benchmarks, but with far lower complexity and computational expense. Moreover, we show that last layer retraining on large ImageNet-trained models can also significantly reduce reliance on background and texture information, improving robustness to covariate shift, after only minutes of training on a single GPU.
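The core idea can be sketched in a few lines: keep the feature extractor frozen and refit only the final linear layer on data where the spurious feature is decorrelated from the label. The NumPy toy below (hypothetical `retrain_last_layer`, synthetic features — not the paper's actual implementation) fits a logistic-regression "last layer" on precomputed features and recovers the core feature while downweighting the spurious one.

```python
import numpy as np

def retrain_last_layer(feats, labels, epochs=200, lr=0.5):
    """Fit a fresh linear classifier (logistic regression) on frozen features."""
    n, d = feats.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        logits = feats @ w + b
        p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
        grad = p - labels                   # dL/dlogits for binary cross-entropy
        w -= lr * (feats.T @ grad) / n
        b -= lr * grad.mean()
    return w, b

# Toy stand-in for extracted features: dim 0 is a "core" feature aligned with
# the label; dim 1 is a "spurious" feature decorrelated from it on this
# (group-balanced) reweighting set.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
core = y + 0.1 * rng.normal(size=500)
spur = rng.integers(0, 2, 500) + 0.1 * rng.normal(size=500)
X = np.stack([core, spur], axis=1)

w, b = retrain_last_layer(X, y)
preds = (X @ w + b > 0).astype(int)
acc = (preds == y).mean()
```

Because the spurious dimension carries no signal on the balanced set, the retrained layer assigns it a much smaller weight than the core dimension, which is the mechanism behind the robustness gains described above.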

Polina Kirichenko, Pavel Izmailov, Andrew Gordon Wilson • 2022

Related benchmarks

Task | Dataset | Result | Rank
Sentiment Classification | SST2 (test) | Accuracy: 91.5 | 233
Image Classification | Waterbirds | Average Accuracy: 96.1 | 157
Image Classification | Waterbirds (test) | Worst-Group Accuracy: 92.9 | 112
Classification | CelebA (test) | -- | 92
Natural Language Inference | MultiNLI (test) | -- | 81
Fine-grained Classification | Stanford Cars | Accuracy: 82.74 | 50
Classification | CivilComments (test) | Worst-case Accuracy: 81.8 | 47
Image Classification | CelebA | WG Score: 89.6 | 42
Group Robustness | CivilComments-WILDS (test) | WG Accuracy: 48.2 | 40
Image Classification | MetaShift | Average Accuracy: 77.5 | 33

(Showing 10 of 65 rows.)
