
Calibrate Before Use: Improving Few-Shot Performance of Language Models

About

GPT-3 can perform numerous tasks when provided a natural language prompt that contains a few training examples. We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art. We demonstrate that this instability arises from the bias of language models towards predicting certain answers, e.g., those that are placed near the end of the prompt or are common in the pre-training data. To mitigate this, we first estimate the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as "N/A". We then fit calibration parameters that cause the prediction for this input to be uniform across answers. On a diverse set of tasks, this contextual calibration procedure substantially improves GPT-3 and GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across different choices of the prompt.
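The calibration step described above can be sketched in a few lines: the paper's diagonal-W variant sets W = diag(p_cf)^-1 with b = 0, where p_cf is the model's probability vector on a content-free input such as "N/A", so that the content-free input itself would be scored uniformly. The function name and the example probability values below are illustrative, not from the paper.

```python
import numpy as np

def contextual_calibration(p_cf, p_test):
    """Adjust class probabilities using the model's output on a
    content-free input (e.g. "N/A").

    Diagonal-W variant: W = diag(p_cf)^-1, b = 0, then renormalize.
    """
    W = np.diag(1.0 / p_cf)   # down-weight classes the prompt is biased toward
    q = W @ p_test
    return q / q.sum()        # renormalize to a probability distribution

# Hypothetical two-class example: the prompt biases the model toward class 0.
p_cf = np.array([0.7, 0.3])    # probabilities on the content-free input
p_test = np.array([0.6, 0.4])  # raw probabilities on a real test input
print(contextual_calibration(p_cf, p_test))
```

Here the raw prediction is class 0, but after dividing out the bias measured on the content-free input, the calibrated prediction flips to class 1.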

Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh · 2021

Related benchmarks

Task | Dataset | Result | Rank
Multi-task Language Understanding | MMLU | Accuracy 50.84 | 842
Natural Language Understanding | GLUE (test) | -- | 416
Natural Language Inference | RTE | Accuracy 71.99 | 367
Text Classification | AG-News | Accuracy 85.9 | 248
Text Classification | AG News (test) | -- | 210
Question Classification | TREC | Accuracy 83.8 | 205
Topic Classification | AG-News | Accuracy 88.23 | 173
Question Answering | ARC | Accuracy 64.33 | 154
Sentiment Analysis | MR | Accuracy 0.932 | 142
Subjectivity Classification | Subj (test) | Accuracy 70.4 | 125
Showing 10 of 115 rows
