
Mitigating Lost-in-Retrieval Problems in Retrieval Augmented Multi-Hop Question Answering

About

In this paper, we identify a critical problem in retrieval-augmented multi-hop question answering (QA), which we call "lost-in-retrieval": key entities are missed during LLMs' sub-question decomposition. Lost-in-retrieval significantly degrades retrieval performance, which disrupts the reasoning chain and leads to incorrect answers. To resolve this problem, we propose a progressive retrieval and rewriting method, ChainRAG, which handles each sub-question in sequence by completing missing key entities and retrieving relevant sentences from a sentence graph for answer generation. Each step in the retrieval and rewriting process builds on the previous one, creating a seamless chain that leads to accurate retrieval and answers. Finally, all retrieved sentences and sub-question answers are integrated to generate a comprehensive answer to the original question. We evaluate ChainRAG on three multi-hop QA datasets (MuSiQue, 2Wiki, and HotpotQA) using three large language models: GPT4o-mini, Qwen2.5-72B, and GLM-4-Plus. Empirical results demonstrate that ChainRAG consistently outperforms baselines in both effectiveness and efficiency.
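The progressive "retrieve then rewrite" loop the abstract describes can be sketched as follows. Every helper below is an illustrative stub invented for this example, not the paper's implementation: a real system would use an LLM for decomposition and answering, and a sentence-graph index for retrieval.

```python
def decompose(question):
    # Stub decomposer: an LLM would produce (placeholder, sub-question)
    # pairs; later sub-questions reference earlier answers by placeholder.
    return [
        ("[director]", "Who directed Film X?"),
        ("[answer]", "When was [director] born?"),
    ]

def complete_entities(sub_question, resolved):
    # Rewrite step: restore key entities that were "lost" during
    # decomposition by substituting answers from earlier hops.
    for placeholder, entity in resolved.items():
        sub_question = sub_question.replace(placeholder, entity)
    return sub_question

def retrieve(sub_question, corpus):
    # Stub retriever: pick the sentence with the largest word overlap
    # (a stand-in for the paper's sentence-graph retrieval).
    q_words = set(sub_question.lower().rstrip("?").split())
    return max(corpus,
               key=lambda s: len(q_words & set(s.lower().rstrip(".").split())))

def answer(sub_question, sentence):
    # Stub reader: pull the tail of the evidence sentence.
    for cue in (" by ", " in "):
        if cue in sentence:
            return sentence.split(cue)[-1].rstrip(".")
    return sentence

def chain_rag(question, corpus):
    resolved, evidence = {}, []
    for placeholder, sub_q in decompose(question):
        sub_q = complete_entities(sub_q, resolved)   # rewriting
        sentence = retrieve(sub_q, corpus)           # retrieval
        resolved[placeholder] = answer(sub_q, sentence)
        evidence.append(sentence)
    # Final step: all retrieved sentences and sub-answers would be fed
    # to an LLM to generate the comprehensive answer.
    return resolved, evidence

corpus = [
    "Film X was directed by Jane Doe.",
    "Jane Doe was born in 1970.",
]
resolved, evidence = chain_rag("When was the director of Film X born?", corpus)
print(resolved["[answer]"])  # 1970
```

The key detail is that `complete_entities` runs before retrieval on every hop, so the second sub-question is rewritten to mention "Jane Doe" explicitly; without that rewrite, the retriever would have no entity to match against and the reasoning chain would break.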

Rongzhi Zhu, Xiangyu Liu, Zequn Sun, Yiwei Wang, Wei Hu• 2025

Related benchmarks

Task                          | Dataset         | Result          | Rank
------------------------------|-----------------|-----------------|-----
Multi-hop Question Answering  | HotpotQA        | F1 Score: 64.59 | 221
Multi-hop Question Answering  | HotpotQA (test) | --              | 198
Multi-hop Question Answering  | 2WikiMQA        | F1 Score: 62.55 | 154
Multi-hop Question Answering  | MuSiQue (test)  | --              | 111
Multi-hop Question Answering  | HotpotQA        | F1 Score: 64.59 | 48
Multi-hop Question Answering  | 2Wiki           | F1 Score: 70.58 | 41
Multi-hop Question Answering  | 2Wiki (test)    | --              | 20
