From Few to Many: Self-Improving Many-Shot Reasoners Through Iterative Optimization and Generation

About

Recent advances in long-context large language models (LLMs) have led to the emerging paradigm of many-shot in-context learning (ICL), where it has been observed that scaling to many more demonstration examples in the context, beyond the conventional few-shot setup, can yield performance benefits. However, despite this promise, it is unclear which aspects dominate the benefits and whether simply scaling to more examples is the most effective way to improve many-shot ICL. In this work, we first analyze the factors driving many-shot ICL, and we find that 1) many-shot performance can often be attributed to a few disproportionately influential examples and 2) identifying such influential examples ("optimize") and using them as demonstrations to regenerate new examples ("generate") can lead to further improvements. Inspired by these findings, we propose BRIDGE, an algorithm that alternates between an optimize step, which uses Bayesian optimization to discover influential sets of examples, and a generate step, which reuses this set to automatically expand the reasoning paths of the examples back to the many-shot regime. On Gemini, Claude, and Mistral LLMs of different sizes, we show that BRIDGE leads to significant improvements across a diverse set of tasks, including symbolic reasoning, numerical reasoning, and code generation.
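
To make the alternating procedure concrete, here is a minimal Python sketch of a BRIDGE-style loop. It is an illustration under stated assumptions, not the authors' implementation: `eval_fn` and `generate_fn` are hypothetical stand-ins for a validation scorer and an LLM-based example generator, and plain random search stands in for the Bayesian optimization used in the paper.

```python
import random

def bridge_loop(pool, eval_fn, generate_fn,
                subset_size=8, n_rounds=3, n_candidates=20):
    """Alternate between an 'optimize' step that searches for an
    influential demonstration subset and a 'generate' step that
    regrows the pool back to the many-shot regime from that subset."""
    for _ in range(n_rounds):
        # Optimize: search over candidate subsets for the one that
        # scores best when used as in-context demonstrations.
        # (Random search here; the paper uses Bayesian optimization.)
        best_subset, best_score = None, float("-inf")
        for _ in range(n_candidates):
            candidate = random.sample(pool, min(subset_size, len(pool)))
            score = eval_fn(candidate)  # e.g. validation accuracy
            if score > best_score:
                best_subset, best_score = candidate, score
        # Generate: prompt the LLM with the influential subset as
        # demonstrations to write new examples with fresh reasoning
        # paths, restoring a many-shot-sized pool.
        n_new = len(pool) - len(best_subset)
        pool = best_subset + generate_fn(best_subset, n_new)
    return pool
```

In a faithful implementation, the optimize step would fit a surrogate model over subset choices rather than sampling uniformly, and `generate_fn` would wrap an LLM call that conditions on the influential subset as demonstrations.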

Xingchen Wan, Han Zhou, Ruoxi Sun, Hootan Nakhost, Ke Jiang, Sercan Ö. Arık • 2025

Related benchmarks

Task               Dataset           Metric    Result  Rank
Automated Grading  DI 11 categories  Accuracy  90      6
Automated Grading  DC 4 tasks        Accuracy  0.76    6
Automated Grading  DT 4 tasks        Accuracy  66      6
