
A Survey on In-context Learning

About

With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP), where LLMs make predictions based on contexts augmented with a few examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a significant trend. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques, including training strategies, prompt design strategies, and related analysis. Additionally, we explore various ICL application scenarios, such as data engineering and knowledge updating. Finally, we address the challenges of ICL and suggest potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and on improving ICL.
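The core mechanic the abstract describes, that the model predicts from a context augmented with a few demonstrations, with no parameter updates, can be sketched as simple prompt construction. The sentiment-classification template, demonstrations, and helper name below are illustrative assumptions, not content from the survey:

```python
# Minimal sketch of an ICL prompt: k input-label demonstrations are
# concatenated with a new query, and the LLM completes the missing label.
# No gradient updates occur; "learning" happens entirely in the context.

def build_icl_prompt(demonstrations, query,
                     template="Review: {x}\nSentiment: {y}"):
    """Concatenate demonstrations and a query into a single prompt string."""
    parts = [template.format(x=x, y=y) for x, y in demonstrations]
    # For the query, leave the label slot empty for the model to fill in.
    parts.append(template.format(x=query, y="").rstrip())
    return "\n\n".join(parts)

demos = [
    ("The movie was delightful.", "positive"),
    ("A dull, plodding mess.", "negative"),
]
prompt = build_icl_prompt(demos, "An absolute triumph.")
print(prompt)
```

The resulting string ends with an unanswered "Sentiment:" slot; in practice it would be sent to an LLM, whose next-token prediction serves as the classification. Prompt-design strategies surveyed in the paper (demonstration selection, ordering, and formatting) all operate on this construction step.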

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, Baobao Chang, Xu Sun, Lei Li, Zhifang Sui • 2022

Related benchmarks

Task                                       Dataset               Metric                         Result    Rank
Intent Classification                      Banking77 (test)      Accuracy                       83.9      151
Chinese Spelling Correction                CSCD-NS               Sentence Correction F1 Score   54.28     35
Intent Classification                      Clinc150 (test)       Accuracy                       91.6      26
Long-context language understanding suite  ZeroSCROLLS           GovReport Score                26.7      24
Intent Classification                      HWU64 (test)          Accuracy                       84.1      9
Intent Classification                      LIU54 (test)          Accuracy                       71.1      9
Reinforcement Learning                     Ant-Dir Medium OOD    Average Return                 0.00e+0   8
Reinforcement Learning                     Ant-Dir Expert IID    Average Return                 78        8
Reinforcement Learning                     Ant-Dir Expert OOD    Average Return                 1         8
Reinforcement Learning                     Ant-Dir Random IID    Average Return                 1         8

(Showing 10 of 13 rows)
