
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

About

Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required -- randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently across 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone.
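The setup the abstract describes can be sketched in a few lines: k input-label demonstrations are concatenated into a prompt, and the paper's random-label ablation replaces each gold label with one sampled uniformly from the label space. The sentiment examples and the "Review:/Sentiment:" template below are illustrative assumptions, not the paper's exact format.

```python
import random

# Label space for a hypothetical binary sentiment task.
LABEL_SPACE = ["positive", "negative"]

# A few (input, gold label) demonstrations; contents are made up for illustration.
demos = [
    ("A gripping, beautifully shot film.", "positive"),
    ("The plot never comes together.", "negative"),
    ("An instant classic.", "positive"),
]

def build_prompt(demos, test_input, randomize_labels=False, seed=0):
    """Concatenate demonstrations and a test input into a single prompt.

    With randomize_labels=True, each demonstration's gold label is replaced
    by a label drawn uniformly at random from LABEL_SPACE -- the ablation
    the paper shows barely hurts performance.
    """
    rng = random.Random(seed)
    lines = []
    for text, gold in demos:
        label = rng.choice(LABEL_SPACE) if randomize_labels else gold
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The test input keeps the same format but leaves the label slot empty
    # for the LM to fill in via inference alone.
    lines.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_prompt(demos, "A waste of two hours.", randomize_labels=True)
print(prompt)
```

Even with randomized labels, the prompt still conveys the label space, the input distribution, and the sequence format -- the three aspects the paper identifies as the key drivers of performance.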

Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer • 2022

Related benchmarks

Task                         Dataset       Metric          Result  Rank
Natural Language Inference   RTE           Accuracy        60      367
Subjectivity Classification  Subj          Accuracy        64.9    266
Sentiment Classification     SST2 (test)   Accuracy        93.9    214
Sentiment Classification     SST-2         Accuracy        80.1    174
Sentiment Analysis           SST-5 (test)  Accuracy        41.8    173
Topic Classification         AG-News       Accuracy        66.2    173
Sentiment Classification     MR            Accuracy        73.1    148
Sentiment Classification     MR (test)     Accuracy        87.3    142
Sentiment Classification     CR (test)     Mean Accuracy   82.3    58
Sentiment Classification     Yelp (test)   Accuracy        94.5    46

(Showing 10 of 18 rows)
