Chunks as Arms: Multi-Armed Bandit-Guided Sampling for Long-Context LLM Preference Optimization
About
Long-context modeling is critical for a wide range of real-world tasks, including long-context question answering, summarization, and complex reasoning. Recent studies have explored fine-tuning Large Language Models (LLMs) with synthetic data to enhance their long-context capabilities. However, the effectiveness of such approaches is often limited by the low diversity and factual inconsistencies of the generated data. To address these challenges, we propose LongMab, a novel framework that leverages a Multi-Armed Bandit (MAB) rollout strategy to identify the most informative chunks of a given long context, sample high-quality and diverse responses from them, and construct preference data pairs for Direct Preference Optimization (DPO) training. Specifically, we treat context chunks as the arms of a MAB, select chunks to feed into the LLM based on their expected reward scores, and iteratively update these scores from reward feedback on the generated responses. Balancing exploration and exploitation during the rollout process enables the LLM to focus on the most relevant context segments and thereby collect high-quality, diverse responses. Experimental results on both Llama and Qwen show the effectiveness of LongMab, with improvements of more than 4% on long-context reasoning benchmarks. All data and code will be released at https://github.com/NEUIR/LongMab-PO.
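The chunk-selection loop described above can be sketched as a standard UCB1 bandit, where each context chunk is an arm and pulling an arm means generating a response conditioned on that chunk and scoring it. This is a minimal illustrative sketch, not the released implementation: the `reward_fn` callback standing in for the LLM-generation-plus-reward-model step is a hypothetical placeholder.

```python
import math

def ucb_chunk_rollout(chunks, reward_fn, n_rounds=50, c=1.0):
    """Treat each context chunk as a bandit arm.

    Each round selects the chunk with the highest UCB score, calls a
    (hypothetical) reward_fn that returns (reward, response) for a
    response generated from that chunk, and updates the arm's running
    mean reward. Returns the per-chunk mean rewards and the rollout log.
    """
    counts = [0] * len(chunks)   # pulls per arm
    means = [0.0] * len(chunks)  # running mean reward per arm
    rollout = []                 # (arm index, reward, response) per round

    for t in range(1, n_rounds + 1):
        def ucb(i):
            if counts[i] == 0:
                return float("inf")  # pull every arm at least once
            # Exploitation term (mean reward) plus exploration bonus.
            return means[i] + c * math.sqrt(math.log(t) / counts[i])

        arm = max(range(len(chunks)), key=ucb)
        reward, response = reward_fn(chunks[arm])
        rollout.append((arm, reward, response))
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean

    return means, rollout
```

High- and low-reward responses gathered in `rollout` can then be paired off as chosen/rejected examples for DPO-style preference data, which is the role the rollout plays in the framework described above.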
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Question Answering | 2WikiMQA | SubEM | 79.5 | 36 |
| Long-context Question Answering | NarrativeQA | SubEM | 22 | 36 |
| Long-context Question Answering | En.QA | SubEM | 35.9 | 36 |
| Long-context Question Answering | MFQA en | SubEM | 26 | 36 |
| Long-context Understanding | MuSiQue | SubEM | 50 | 27 |
| Long-context Question Answering | MuSiQue | F1 Score | 51.02 | 19 |
| Long-context Understanding | Average Overall | SubEM | 40.95 | 18 |
| Long-context Understanding | LV-Eval 16k | SubEM | 40 | 9 |
| Long-context Understanding | LV-Eval 128k | SubEM | 17.5 | 9 |
| Long-context Understanding | LV-Eval 32k | SubEM | 37.5 | 9 |