
Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning

About

In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks with just a few examples, but their predictions often suffer from systematic biases, leading to unstable classification performance. While calibration techniques have been proposed to mitigate these biases, we show that, in the logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary, without the ability to alter its orientation. This proves inadequate when biases leave the LLM severely misaligned. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization-based framework that learns an optimal, per-class affine transformation of the LLM's predictive probabilities in the logit space, without requiring external data beyond the context. By using a more expressive functional class, SC not only subsumes many existing ICL calibration methods as special cases but also makes it possible to alter, and even completely reverse, the orientation of the LLM's decision boundary. Furthermore, SC's loss-based nature facilitates the seamless integration of two purpose-built regularization techniques: a context-invariance regularizer and a directional trust-region regularizer. The former is designed to tackle the instability issue in ICL, while the latter controls the degree of calibration. Finally, SC delivers state-of-the-art performance over calibration baselines in the 4-shot, 8-shot, and 16-shot settings across all nine datasets for Mistral-7B-Instruct-v0.3, Llama-2-7B-chat, and Qwen2-7B-Instruct.
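The central idea, a learned per-class affine transformation of the logits, can be illustrated with a minimal sketch. This is not the authors' implementation: the diagonal-scale-plus-shift parameterization, the plain gradient-descent optimizer, and the toy data below are illustrative assumptions, and the paper's two regularizers are omitted.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fit_affine_calibration(logits, labels, lr=0.1, steps=2000):
    """Fit a per-class affine map z -> w * z + b of the LLM's logits
    by gradient descent on cross-entropy over the labeled in-context
    examples. Unlike a pure shift b, the per-class scale w can change
    the orientation of the decision boundary, not just translate it."""
    n, k = logits.shape
    w = np.ones(k)             # per-class scales (init: identity map)
    b = np.zeros(k)            # per-class shifts (init: no shift)
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        p = softmax(logits * w + b)
        g = (p - onehot) / n                # dL/d(calibrated logits)
        w -= lr * (g * logits).sum(axis=0)  # chain rule: ds_j/dw_j = z_j
        b -= lr * g.sum(axis=0)
    return w, b

# Toy example: raw logits are biased toward class 0, so the
# uncalibrated model predicts class 0 on every example.
logits = np.array([[3.0, 0.0], [3.2, 0.5], [2.5, 2.0], [2.8, 2.3]])
labels = np.array([0, 0, 1, 1])
w, b = fit_affine_calibration(logits, labels)
calibrated = softmax(logits * w + b)
```

On this toy data a shift-only calibration would trade errors on one class for errors on the other, whereas the learned scale separates the two classes.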

Korel Gundem, Juncheng Dong, Dennis Zhang, Vahid Tarokh, Zhengling Qi• 2025

Related benchmarks

Task                         Dataset   Result            Rank
Subjectivity Classification  Subj      Accuracy 70.86    329
Question Classification     TREC      Accuracy 73.98    259
Topic Classification        AG-News   Accuracy 87.81    225
Text Classification         TREC      Accuracy 69.06    207
Sentiment Analysis          SST-5     Accuracy 47.27    106
Text Classification         SST2      Accuracy 95.39    71
Sentiment Analysis          FPB       Accuracy 85.78    65
Text Classification         AGNews    Accuracy 80.23    61
Text Classification         SST-5     Accuracy 48.52    52
Text Classification         Subj      CA (%) 72.5       48
Showing 10 of 36 rows
