
Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning

About

In recent years, pre-trained large language models (LLMs) have demonstrated a remarkable inference-time few-shot learning capability known as in-context learning. However, existing literature has highlighted that this capability is sensitive to the selection of few-shot demonstrations. Current understandings of how this capability arises from regular language model pretraining objectives remain disconnected from real-world LLMs. This study examines the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models. On this premise, we propose an algorithm that selects optimal demonstrations from a set of annotated data using a small LM, and then directly transfers the selected demonstrations to larger LMs. We demonstrate a significant improvement over baselines, averaged over eight GPT models on eight real-world text classification datasets. We also demonstrate the real-world usefulness of our algorithm on GSM8K, a math word problem dataset. Our empirical findings support our hypothesis that LLMs implicitly infer a latent variable containing task information.
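The core selection idea described above (score candidate demonstrations with a small LM, then reuse the winners with larger LMs) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the scoring function `small_lm_label_prob` is a hypothetical stand-in for computing a gold-label probability under a small language model, and all names are assumptions.

```python
# Hypothetical sketch of demonstration selection with a small LM.
# Each candidate is scored by how well, when prepended as a demonstration,
# it helps the small LM predict gold labels on a held-out dev set.
# The top-k demonstrations are then reused verbatim with larger LMs.

from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, label)


def select_demonstrations(
    candidates: List[Example],
    dev_set: List[Example],
    small_lm_label_prob: Callable[[str, str, str], float],
    k: int = 4,
) -> List[Example]:
    """Rank candidate demonstrations by average gold-label probability
    on the dev set and return the top k."""
    scored = []
    for demo_x, demo_y in candidates:
        prompt_prefix = f"Input: {demo_x}\nLabel: {demo_y}\n\n"
        avg = sum(
            small_lm_label_prob(prompt_prefix, x, y) for x, y in dev_set
        ) / len(dev_set)
        scored.append((avg, (demo_x, demo_y)))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [demo for _, demo in scored[:k]]


# Toy stand-in scorer (a real one would query a small LM's logits):
# it favors demonstrations whose label matches the dev example's label.
def toy_scorer(prefix: str, x: str, y: str) -> float:
    return 1.0 if f"Label: {y}" in prefix else 0.1


candidates = [("great movie", "positive"), ("dull plot", "negative")]
dev = [("loved it", "positive"), ("fantastic", "positive")]
best = select_demonstrations(candidates, dev, toy_scorer, k=1)
```

In practice the scorer would be replaced by a forward pass of a small LM that returns the probability of the gold label given the demonstration-prefixed prompt; the toy version above only exists to make the sketch self-contained.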

Xinyi Wang, Wanrong Zhu, Michael Saxon, Mark Steyvers, William Yang Wang · 2023

Related benchmarks

| Task | Dataset | Accuracy | Rank |
|---|---|---|---|
| Commonsense Reasoning | HellaSwag | 86.72 | 1460 |
| Natural Language Inference | RTE | 56.3 | 367 |
| Reading Comprehension | BoolQ | 77.6 | 219 |
| Natural Language Inference | SNLI | 43.5 | 174 |
| Topic Classification | AG-News | 67.3 | 173 |
| Sentiment Analysis | SST-2 | 88.8 | 156 |
| Commonsense Reasoning | COPA | 83 | 138 |
| Text Classification | SST-2 | 90.3 | 121 |
| Text Classification | AGNews | 61 | 119 |
| Topic Classification | DBpedia | 75.5 | 117 |

(Showing 10 of 41 rows.)
