
Chunks as Arms: Multi-Armed Bandit-Guided Sampling for Long-Context LLM Preference Optimization

About

Long-context modeling is critical for a wide range of real-world tasks, including long-context question answering, summarization, and complex reasoning. Recent studies have explored fine-tuning Large Language Models (LLMs) on synthetic data to enhance their long-context capabilities. However, the effectiveness of such approaches is often limited by the low diversity and factual inconsistencies of the generated data. To address these challenges, we propose LongMab, a novel framework that leverages a Multi-Armed Bandit (MAB) rollout strategy to identify the most informative chunks of a given long context, sample high-quality and diverse responses, and construct preference pairs for Direct Preference Optimization (DPO) training. Specifically, we treat context chunks as the arms of a MAB, select chunks according to their expected reward scores as input for LLM response generation, and iteratively update these scores based on reward feedback. Balancing exploration and exploitation during the rollout enables the LLM to focus on the most relevant context segments and thereby generate high-quality, diverse responses. Experimental results on both Llama and Qwen demonstrate the effectiveness of LongMab, with improvements of more than 4% on long-context reasoning benchmarks. All data and code will be released at https://github.com/NEUIR/LongMab-PO.
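The chunks-as-arms idea can be sketched with a standard UCB1 bandit loop. This is a minimal illustration, not the paper's method: the exact chunk-scoring rule, the number of chunks fed per rollout (`k`), and the shared-reward update are assumptions, and `reward_fn` stands in for whatever reward model or answer-checking signal scores a generated response.

```python
import math

def longmab_rollout(chunks, reward_fn, k=3, rounds=20, c=1.0):
    """UCB-style rollout over context chunks (illustrative sketch).

    Each chunk is a bandit arm; each round, the top-k chunks by UCB score
    are fed to the LLM (abstracted here as reward_fn), and the resulting
    reward updates the running mean of every selected arm.
    """
    n = len(chunks)
    counts = [0] * n       # how often each chunk has been selected
    values = [0.0] * n     # running mean reward per chunk
    collected = []         # (selected_chunk_indices, reward) per round

    for t in range(1, rounds + 1):
        def ucb(i):
            # Untried arms get infinite priority (pure exploration).
            if counts[i] == 0:
                return float("inf")
            # Exploit high-mean chunks, but keep exploring rarely tried ones.
            return values[i] + c * math.sqrt(math.log(t) / counts[i])

        chosen = sorted(range(n), key=ucb, reverse=True)[:k]
        reward = reward_fn([chunks[i] for i in chosen])

        # Incremental mean update for each selected arm.
        for i in chosen:
            counts[i] += 1
            values[i] += (reward - values[i]) / counts[i]
        collected.append((chosen, reward))
    return values, collected
```

From the collected rollouts, one could pair a high-reward response with a low-reward one to form the chosen/rejected examples for DPO training; the specific pairing criterion used by LongMab is not spelled out in the abstract and would come from the paper itself.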

Shaohua Duan, Pengcheng Huang, Xinze Li, Zhenghao Liu, Xiaoyuan Yi, Yukun Yan, Shuo Wang, Yu Gu, Ge Yu, Maosong Sun• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Question Answering | 2WikiMQA | SubEM | 79.5 | 36 |
| Long-context Question Answering | NarrativeQA | SubEM | 22 | 36 |
| Long-context Question Answering | En.QA | SubEM | 35.9 | 36 |
| Long-context Question Answering | MFQA en | SubEM | 26 | 36 |
| Long-context Understanding | MuSiQue | SubEM | 50 | 27 |
| Long-context Question Answering | MuSiQue | F1 Score | 51.02 | 19 |
| Long-context Understanding | Average Overall | SubEM | 40.95 | 18 |
| Long-context Understanding | LV-Eval 16k | SubEM | 40 | 9 |
| Long-context Understanding | LV-Eval 128k | SubEM | 17.5 | 9 |
| Long-context Understanding | LV-Eval 32k | SubEM | 37.5 | 9 |

(Showing 10 of 12 rows)
