
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners

About

When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous amounts of compute are required for training and applying such big models, resulting in a large carbon footprint and making it difficult for researchers and practitioners to use them. We show that performance similar to GPT-3 can be obtained with language models that are much "greener" in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models.

Timo Schick, Hinrich Schütze • 2020
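
To make the cloze-question idea concrete, here is a minimal sketch of pattern-verbalizer scoring with a masked language model, using Hugging Face Transformers. The model choice (bert-base-uncased) and the exact pattern and verbalizer below are illustrative assumptions, not the authors' reference implementation; the paper additionally fine-tunes the masked LM on the few labeled examples via gradient-based optimization and exploits unlabeled data by distilling an ensemble of patterns.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Minimal sketch (not the authors' reference code): classify a sentiment
# input by converting it into a cloze question. The pattern ("It was ___.")
# and the verbalizer below are assumed mappings for illustration.

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Verbalizer: one vocabulary token per label (assumed mapping).
VERBALIZER = {"positive": "great", "negative": "terrible"}

def score_labels(review: str) -> dict:
    # Pattern: embed the input in a cloze question with a mask slot.
    prompt = f"{review} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the mask position and take a softmax over the vocabulary there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    probs = logits[0, mask_pos].softmax(dim=-1)
    # Probability of each label's verbalizer token filling the mask.
    return {
        label: probs[tokenizer.convert_tokens_to_ids(token)].item()
        for label, token in VERBALIZER.items()
    }

print(score_labels("A gorgeous, witty film that never drags."))
# e.g. {'positive': 0.93, 'negative': 0.02} -> argmax gives the prediction
```

In the full method, the same cloze scoring is applied after fine-tuning the masked LM on as few as 32 labeled examples, and soft labels produced by an ensemble of such patterns on unlabeled data train the final classifier.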

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text Classification | TREC (test) | Accuracy | 85 | 113 |
| Natural Language Inference | MNLI (matched) | Accuracy | 71.2 | 110 |
| Natural Language Inference | MNLI (mismatched) | Accuracy | 71.8 | 68 |
| Natural Language Inference | QNLI (test) | Accuracy | 70.3 | 27 |
| Natural Language Understanding | SuperGLUE few-shot | BoolQ Accuracy | 0.783 | 16 |
| Event Detection | ACE05 2-shot (test) | F1 Score | 38.4 | 13 |
| Classification | MRPC (test) | Macro F1 | 70.4 | 9 |
| Recognizing Textual Entailment | FewGLUE RTE few-shot (32 examples) (dev) | Accuracy | 74 | 6 |
| Textual Entailment | FewGLUE CB (CommitmentBank) few-shot (32 examples) (dev) | F1 Score | 92.4 | 6 |
| Event Detection | MAVEN 5-shot (test) | F1 Score | 46 | 6 |

Showing 10 of 12 rows.
