
Calibration of Pre-trained Transformers

About

Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated. Specifically, do these models' posterior probabilities provide an accurate empirical measure of how likely the model is to be correct on a given example? We focus on BERT and RoBERTa in this work, and analyze their calibration across three tasks: natural language inference, paraphrase detection, and commonsense reasoning. For each task, we consider in-domain as well as challenging out-of-domain settings, where models face more examples they should be uncertain about. We show that: (1) when used out-of-the-box, pre-trained models are calibrated in-domain, and compared to baselines, their calibration error out-of-domain can be as much as 3.5x lower; (2) temperature scaling is effective at further reducing calibration error in-domain, and using label smoothing to deliberately increase empirical uncertainty helps calibrate posteriors out-of-domain.
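The abstract's two key quantities, calibration error and temperature scaling, can be sketched in a few lines. Below is a minimal NumPy illustration (not the authors' code): expected calibration error (ECE) bins predictions by confidence and averages the gap between accuracy and confidence per bin, while temperature scaling divides logits by a scalar T before the softmax to soften or sharpen posteriors. Function names and bin count are illustrative choices.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin examples by predicted confidence, then take the
    bin-size-weighted average of |accuracy - mean confidence|."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # weight each bin by the fraction of examples it holds
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def temperature_scale(logits, T):
    """Softmax over logits / T; T > 1 softens (lowers confidence),
    T < 1 sharpens. Fitted as a single post-hoc parameter on dev data."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```

In the post-hoc setting described in the paper, T is tuned on held-out data to minimize negative log-likelihood without changing the model's predictions (the argmax is invariant to T).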

Shrey Desai, Greg Durrett • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sentiment Classification | SST-5 | Accuracy 69.73 | 31 |
| Natural Language Inference | HANS | Accuracy 55.06 | 23 |
| Natural Language Inference | aNLI | Accuracy 31.31 | 18 |
| Toxicity Detection | Hate Speech | Accuracy 75.52 | 10 |
| Natural Language Inference | MNLI | Accuracy 86.5 | 10 |
| Sentiment Classification | SemEval | Accuracy 55.03 | 10 |
| Toxicity Detection | Civil | Accuracy 86.08 | 10 |
| Toxicity Detection | Implicit Hate | Accuracy 60.64 | 10 |
| Sentiment Classification | AMAZON | Accuracy 91 | 10 |
| Commonsense Reasoning | SWAG In-Domain (test) | Accuracy 82.45 | 8 |

(10 of 15 rows shown)
