
$k$NN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference

About

In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing way of utilizing LLMs. In this paper, we first expose a practical predicament of this typical usage: it cannot scale up with training data due to context length restrictions. Moreover, existing works have shown that ICL suffers from various biases and requires delicate calibration treatment. To address both challenges, we advocate a simple and effective solution, $k$NN Prompting, which first queries the LLM with training data to obtain distributed representations, then predicts test instances by simply referring to their nearest neighbors. We conduct comprehensive experiments to demonstrate its two-fold superiority: 1) Calibration-Free: $k$NN Prompting does not directly align the LLM output distribution with the task-specific label space; instead, it leverages that distribution to align test and training instances. It significantly outperforms state-of-the-art calibration-based methods under comparable few-shot scenarios. 2) Beyond-Context: $k$NN Prompting can further scale up effectively with as much training data as is available, continually bringing substantial improvements. The scaling trend holds across 10 binary orders of magnitude, from 2 shots to 1024 shots, as well as across LLM scales from 0.8B to 30B. It successfully bridges data scaling into model scaling, and opens new potential for the gradient-free paradigm of LLM deployment. Code is publicly available.
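The core inference step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes that each training and test instance has already been mapped to an LLM next-token output distribution (here represented as toy NumPy vectors), and it predicts a test label by majority vote among the $k$ training distributions closest under KL divergence.

```python
import numpy as np

def knn_prompt_predict(train_dists, train_labels, test_dist, k=3):
    """Predict a test instance's label by majority vote among the k
    training instances whose (LLM output) distributions are nearest
    to the test distribution under KL(test || train)."""
    eps = 1e-12  # avoid log(0) / division by zero
    p = np.asarray(test_dist) + eps
    divergences = []
    for q in train_dists:
        q = np.asarray(q) + eps
        divergences.append(float(np.sum(p * np.log(p / q))))
    nearest = np.argsort(divergences)[:k]  # indices of k closest neighbors
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy example: vocab of size 5; "pos" instances put mass on token 0,
# "neg" instances on token 1 (synthetic stand-ins for real LLM outputs).
train_dists = [
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.60, 0.20, 0.10, 0.05, 0.05],
    [0.10, 0.70, 0.10, 0.05, 0.05],
    [0.20, 0.60, 0.10, 0.05, 0.05],
]
train_labels = ["pos", "pos", "neg", "neg"]
test_dist = [0.65, 0.15, 0.10, 0.05, 0.05]
print(knn_prompt_predict(train_dists, train_labels, test_dist, k=3))
```

Because the prediction only compares distributions against stored training instances, no alignment between the LLM's output vocabulary and the task label space is needed, which is why the method sidesteps calibration.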

Benfeng Xu, Quan Wang, Zhendong Mao, Yajuan Lyu, Qiaoqiao She, Yongdong Zhang • 2023

Related benchmarks

Task                          Dataset   Accuracy (%)  Rank
Natural Language Inference    RTE       83.6          367
Subjectivity Classification   Subj      95.5          266
Question Classification       TREC      90.5          205
Topic Classification          AG-News   89.6          173
Sentiment Analysis            SST-2     94.6          156
Opinion Polarity Detection    MPQA      84.6          154
Sentiment Analysis            MR        93.1          142
Sentiment Analysis            CR        93.7          123
Topic Classification          DBpedia   99.1          117
Natural Language Inference    CB        94.3          110
Showing 10 of 13 rows
