
Can Explanations Be Useful for Calibrating Black Box Models?

About

NLP practitioners often want to take existing trained models and apply them to data from new domains. While fine-tuning or few-shot learning can be used to adapt a base model, there is no single recipe for making these techniques work; moreover, one may not have access to the original model weights if it is deployed as a black box. We study how to improve a black box model's performance on a new domain by leveraging explanations of the model's behavior. Our approach first extracts a set of features combining human intuition about the task with model attributions generated by black box interpretation techniques, then uses a simple calibrator, in the form of a classifier, to predict whether the base model was correct or not. We experiment with our method on two tasks, extractive question answering and natural language inference, covering adaptation from several pairs of domains with limited target-domain data. The experimental results across all the domain pairs show that explanations are useful for calibrating these models, boosting accuracy when predictions do not have to be returned on every example. We further show that the calibration model transfers to some extent between tasks.
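The approach described above can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: the feature set (base-model confidence plus top-k attribution magnitudes) and the tiny logistic-regression calibrator are assumptions standing in for whatever attribution method and classifier one actually uses.

```python
import math

def extract_features(confidence, attributions, k=3):
    """Combine the base model's output confidence with simple statistics of
    token-level attribution scores (here: the top-k magnitudes).
    Both inputs are assumed to come from the black-box model and an
    external interpretation technique, respectively."""
    top = sorted((abs(a) for a in attributions), reverse=True)[:k]
    top += [0.0] * (k - len(top))  # pad if the input has fewer than k tokens
    return [confidence] + top

def train_calibrator(X, y, lr=0.5, epochs=200):
    """Fit a minimal logistic-regression calibrator (SGD) that predicts
    whether the base model's prediction was correct (y=1) or not (y=0)."""
    w = [0.0] * (len(X[0]) + 1)  # feature weights + bias term
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - t  # gradient of the log loss w.r.t. z
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w

def calibrator_score(w, x):
    """Calibrated probability that the base model is correct on x."""
    z = w[-1] + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def selective_predict(w, x, threshold=0.5):
    """Selective prediction: return the base model's answer only when the
    calibrator score clears the threshold; otherwise abstain (False)."""
    return calibrator_score(w, x) >= threshold
```

In a selective-prediction setting, sweeping the threshold trades coverage (fraction of examples answered) against accuracy on the answered subset, which is what the area-under-Coverage-F1 numbers in the table below measure.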

Xi Ye, Greg Durrett • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Natural Language Inference | QNLI | Accuracy | 75 | 42
Selective Question Answering | SQ-ADV | Area under Coverage-F1 | 94.5 | 12
Selective Question Answering | Trivia | Area under Coverage-F1 | 92.5 | 12
Selective Question Answering | HotPot | Area under Coverage-F1 | 92.5 | 12
Natural Language Inference | MRPC | Accuracy | 0.736 | 5
Question Answering Calibration | SQuAD adversarial (test) | Accuracy | 70.3 | 5
Question Answering Calibration | TriviaQA (test) | Accuracy | 72 | 5
Question Answering Calibration | HotpotQA (test) | Accuracy | 65.7 | 5
