Coverage-based Example Selection for In-Context Learning
About
In-context learning (ICL), the ability of large language models to perform novel tasks by conditioning on a prompt with a few task examples, requires these examples to be informative about the test instance. The standard approach of independently ranking and selecting the most similar examples yields redundant demonstrations while omitting important information. In this work, we show that BERTScore-Recall (BSR) selects better examples that demonstrate more of the salient aspects, e.g., reasoning patterns, of the test input. We further extend BSR and many standard metrics to easily optimizable set-level metrics, giving still better coverage of those salient aspects. On 15 datasets spanning 6 tasks and with 7 diverse LLMs, we show that (1) BSR is the superior metric for in-context example selection across the board, and (2) for compositional tasks, set selection using Set-BSR outperforms independent ranking by up to 17 points on average and, despite being training-free, surpasses methods that leverage task- or LLM-specific training.
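To make the two selection strategies concrete, here is a minimal sketch of per-example BSR scoring and greedy set-level selection. It assumes contextual token embeddings have already been extracted for the test input and each candidate example (e.g., from a BERT-family encoder), and it omits the optional IDF weighting used in BERTScore; the function names and the greedy loop are illustrative, not the paper's reference implementation.

```python
import numpy as np

def bsr(test_emb: np.ndarray, example_emb: np.ndarray) -> float:
    """BERTScore-Recall: for each test-input token, take the max cosine
    similarity against the example's tokens, then average over test tokens."""
    # Normalize rows so dot products are cosine similarities.
    t = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    e = example_emb / np.linalg.norm(example_emb, axis=1, keepdims=True)
    return float((t @ e.T).max(axis=1).mean())

def greedy_set_bsr(test_emb: np.ndarray, candidate_embs: list, k: int) -> list:
    """Greedily build a k-shot set under the set-level metric: each test
    token is covered by its best match across ALL selected examples."""
    t = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    # Precompute, per candidate, the best similarity for each test token.
    per_token_sims = []
    for c in candidate_embs:
        c = c / np.linalg.norm(c, axis=1, keepdims=True)
        per_token_sims.append((t @ c.T).max(axis=1))
    covered = np.full(t.shape[0], -1.0)  # best coverage so far (cosine >= -1)
    selected = []
    for _ in range(k):
        # Marginal gain: how much each candidate lifts average per-token coverage.
        gains = [np.maximum(covered, s).mean() - covered.mean()
                 if i not in selected else -np.inf
                 for i, s in enumerate(per_token_sims)]
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, per_token_sims[best])
    return selected
```

Note the contrast with independent ranking, which would simply sort candidates by `bsr` and take the top k, possibly picking near-duplicates. In the greedy loop, an example's marginal gain shrinks once its salient tokens are already covered by earlier picks, which is what steers the set away from redundancy and makes the set-level metric amenable to simple greedy optimization.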
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Inference | SNLI (test) | Accuracy | 79.9 | 690 |
| Reasoning | GSM PRO | Accuracy | 100 | 72 |
| Reasoning | GSM→FOL | Accuracy | 83.6 | 45 |
| Mathematical Reasoning | GSM8K PRO | Accuracy | 94.6 | 18 |
| Mathematical Reasoning | FOLIO to GSM8K | Accuracy | 94.6 | 18 |
| Natural Language Inference | NLI adversarial benchmark (test) | Average Score | 59.7 | 18 |