
Hit-RAG: Learning to Reason with Long Contexts via Preference Alignment

About

Despite the promise of Retrieval-Augmented Generation in grounding Multimodal Large Language Models with external knowledge, the transition to extensive contexts often leads to significant attention dilution and reasoning hallucinations. The surge in information density causes critical evidence to be submerged by voluminous noise, complicating the discernment of relevant fragments within a dense input. In this paper, we propose Hit-RAG, a multi-stage preference alignment framework designed to resolve these cognitive bottlenecks through a progressive optimization pipeline. Our approach systematically refines the utilization of external evidence via three distinct stages. First, Supervised Fine-tuning establishes baseline context awareness to minimize information neglect. Next, Discriminative Preference Alignment enhances robustness against misleading distractors. Finally, Group-Relative Policy Optimization stabilizes logical synthesis to prevent reasoning collapse. Extensive evaluations on eight benchmarks demonstrate that Hit-RAG consistently yields substantial performance gains, enabling models to bridge the gap between context acquisition and accurate reasoning while surpassing much larger counterparts in long-context scenarios.
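The third stage of the pipeline uses Group-Relative Policy Optimization, whose defining step is normalizing each sampled response's reward against the statistics of its own sample group rather than a learned value baseline. The sketch below illustrates only that advantage computation under common GRPO conventions; the function name, reward values, and epsilon are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the group-relative advantage step used in GRPO-style
# training. For one query, several responses are sampled and scored; each
# reward is standardized against the group's mean and standard deviation,
# so no separate critic/value model is needed.
from statistics import mean, pstdev


def group_relative_advantages(rewards, eps=1e-6):
    """Return A_i = (r_i - mean(r)) / (std(r) + eps) for each reward r_i.

    `eps` (an assumed small constant) guards against a zero-variance group.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the sampled group
    return [(r - mu) / (sigma + eps) for r in rewards]


# Example: four sampled answers to one query, scored 1.0 (correct) / 0.0 (not).
advantages = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Correct responses receive positive advantages and incorrect ones negative, so the policy gradient pushes probability mass toward the better half of each group.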

Junming Liu, Yuqi Li, Shiping Wen, Zhigang Zeng, Tingwen Huang • 2026

Related benchmarks

Task | Dataset | Result | Rank
Question Answering | ARC | Accuracy: 86.2 | 230
Question Answering | HotpotQA | F1: 69.7 | 128
Question Answering | TQA | Accuracy: 84.6 | 74
Science Question Answering | ScienceQA | IMG Score: 87.41 | 64
Question Answering | ASQA | -- | 51
Question Answering | PopQA | Accuracy: 70.7 | 26
Question Answering | Pub | Accuracy: 84.1 | 22
Question Answering | Bio | Few-Shot Accuracy: 84.3 | 17
Document Understanding | DocVQA | Accuracy: 60.94 | 3
Knowledge-based Question Answering | OK-VQA + A-OKVQA | Accuracy: 87.31 | 3
