
True Few-Shot Learning with Language Models

About

Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates ("prompts"). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly selected ones. We find similar results even when taking into account our uncertainty in a model's true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.

Ethan Perez, Douwe Kiela, Kyunghyun Cho • 2021
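
As a rough illustration of what cross-validation-based selection looks like when no held-out examples are available, here is a minimal Python sketch. It is not the authors' code: the `lm_log_prob` callable, which maps a (context, answer) pair to the model's log-probability of the answer, and the `prompt_template` format are assumed interfaces standing in for whatever LM scoring API and prompt format are actually used.

```python
# Minimal sketch (not the paper's implementation) of leave-one-out
# cross-validation for choosing a prompt template in the true few-shot
# setting: only the N labeled examples themselves are used for selection.

from typing import Callable, List, Tuple

def loo_cv_score(
    prompt_template: str,                      # e.g. "Q: {input}\nA: {answer}\n\n"
    examples: List[Tuple[str, str]],           # the N (input, answer) few-shot pairs
    lm_log_prob: Callable[[str, str], float],  # assumed: (context, answer) -> log p
) -> float:
    """Average log-probability of each example when it is predicted
    from a prompt built out of the remaining N-1 examples."""
    total = 0.0
    for i, (x_held, y_held) in enumerate(examples):
        # In-context prompt from every example except the held-out one.
        context = "".join(
            prompt_template.format(input=x, answer=y)
            for j, (x, y) in enumerate(examples) if j != i
        )
        # Append the held-out input with an empty answer slot for the LM to fill.
        context += prompt_template.format(input=x_held, answer="").rstrip()
        total += lm_log_prob(context, y_held)
    return total / len(examples)

def select_prompt(
    candidate_templates: List[str],
    examples: List[Tuple[str, str]],
    lm_log_prob: Callable[[str, str], float],
) -> str:
    # True few-shot selection: pick the template with the best
    # cross-validation score, without touching any validation set.
    return max(candidate_templates,
               key=lambda t: loo_cv_score(t, examples, lm_log_prob))
```

Minimum description length selection can be sketched the same way, except that the examples are scored in a fixed order, with each example's log-probability conditioned only on the examples that precede it; the outer selection loop over candidate prompts is unchanged.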

Related benchmarks

Task                        | Dataset         | Result                 | Rank
Mathematical Reasoning      | GSM8K           | Accuracy (GSM8K): 37.5 | 358
Summarization               | XSum (test)     | ROUGE-2: 8.3           | 231
Question Answering          | TriviaQA        | Accuracy: 68.7         | 210
Subjectivity Classification | Subj (test)     | Accuracy: 60.1         | 125
Question Answering          | TriviaQA (test) | Accuracy: 65.8         | 121
Question Answering          | SQuAD (test)    | --                     | 111
Summarization               | XSum            | ROUGE-2: 13            | 108
Question Answering          | SQuAD           | Exact Match: 66.1      | 50
Data-to-text generation     | WebNLG (test)   | --                     | 39
Boolean Question Answering  | BoolQ (test)    | Accuracy (Avg): 64.8   | 38
Showing 10 of 21 rows
