
Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates

About

Among the most critical limitations of deep learning NLP models are their lack of interpretability and their reliance on spurious correlations. Prior work proposed various approaches to interpreting black-box models to unveil spurious correlations, but those interpretations were primarily used in human-computer interaction scenarios. It remains underexplored whether and how such model interpretations can be used to automatically "unlearn" confounding features. In this work, we propose influence tuning, a procedure that leverages model interpretations to update the model parameters towards a plausible interpretation (rather than an interpretation that relies on spurious patterns in the data) in addition to learning to predict the task labels. We show that in a controlled setup, influence tuning can help deconfound the model from spurious patterns in the data, significantly outperforming baseline methods that use adversarial training.

Xiaochuang Han, Yulia Tsvetkov • 2021
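As a rough illustration of the idea (not the authors' released code), the sketch below assumes a gradient-dot-product influence attribution and a squared penalty that pushes the influence between a pair of examples sharing only a spurious pattern toward zero, alongside the usual task loss. The helper names (`influence`, `influence_tuning_step`), the zero influence target, and the joint-loss weighting `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an influence-tuning-style update (illustrative, not the authors' code).
# Influence of a training example on a probe example is approximated by the dot product
# of their loss gradients; the tuning loss pushes that influence toward a target value
# (here zero, for a pair that shares only a spurious/confounding pattern).

import torch
import torch.nn.functional as F

def grad_vector(model, loss):
    """Flatten d(loss)/d(params) into one vector, keeping the graph for second-order grads."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def influence(model, train_batch, probe_batch):
    """Gradient-similarity attribution of train_batch on probe_batch."""
    train_loss = F.cross_entropy(model(train_batch["x"]), train_batch["y"])
    probe_loss = F.cross_entropy(model(probe_batch["x"]), probe_batch["y"])
    return torch.dot(grad_vector(model, train_loss), grad_vector(model, probe_loss))

def influence_tuning_step(model, optimizer, task_batch, confound_pair, lam=1.0):
    """One update combining the task loss with an attribution penalty.

    confound_pair = (train_batch, probe_batch) assumed to share only a spurious
    pattern, so their mutual influence should ideally be close to zero.
    """
    optimizer.zero_grad()
    task_loss = F.cross_entropy(model(task_batch["x"]), task_batch["y"])
    attribution_loss = influence(model, *confound_pair) ** 2  # push influence toward 0
    (task_loss + lam * attribution_loss).backward()
    optimizer.step()
```

Backpropagating through the attribution term requires second-order gradients (hence `create_graph=True`), which is the main computational cost of an update of this kind relative to plain task training.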

Related benchmarks

Task | Dataset | Metric | Result | Rank
Influence Estimation | WinoBias (test) | Spearman Correlation | 0.554 | 14
Influence Estimation | TruthfulQA (test) | Spearman Correlation | 0.446 | 14
Influence Estimation | ToxiGen (test) | Spearman Correlation | 0.104 | 14
