
Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering

About

Multi-Hop Question Answering (MHQA) tasks present a significant challenge for large language models (LLMs) due to the intensive knowledge required. Current solutions, like Retrieval-Augmented Generation, typically retrieve potentially relevant documents from an external corpus and then read an answer from them. However, the performance of this retrieve-then-read paradigm is constrained by the retriever and the inevitable noise in the retrieved documents. To mitigate these challenges, we introduce a novel generate-then-ground (GenGround) framework, synergizing the parametric knowledge of LLMs and external documents to solve a multi-hop question. GenGround empowers LLMs to alternate between two phases until the final answer is derived: (1) formulate a simpler, single-hop question and directly generate the answer; (2) ground the question-answer pair in retrieved documents, amending any wrong predictions in the answer. We also propose an instructional grounding distillation method to generalize our method to smaller models. Extensive experiments conducted on four datasets illustrate the superiority of our method.
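The alternating generate/ground loop described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: all function names (`generate_step`, `ground_step`, `gen_ground`) and the stubbed LLM/retriever behavior are hypothetical stand-ins.

```python
# Hypothetical sketch of the GenGround loop. In a real system,
# generate_step would prompt an LLM to pose and answer a single-hop
# sub-question, and ground_step would verify that answer against
# retrieved documents; both are stubbed here for illustration.

def generate_step(question, history):
    # Phase 1: formulate a simpler single-hop question and draft an
    # answer from the LLM's parametric knowledge (stubbed).
    sub_q = f"sub-question {len(history) + 1} for: {question}"
    draft = f"draft answer to {sub_q}"
    return sub_q, draft

def ground_step(sub_q, draft, retrieve):
    # Phase 2: ground the question-answer pair in retrieved documents,
    # amending the draft when evidence is available (stubbed).
    docs = retrieve(sub_q)
    return f"grounded({draft})" if docs else draft

def gen_ground(question, retrieve, max_hops=3):
    # Alternate the two phases until the final answer is derived;
    # a fixed hop budget stands in for an LLM-driven stopping check.
    history = []
    for _ in range(max_hops):
        sub_q, draft = generate_step(question, history)
        answer = ground_step(sub_q, draft, retrieve)
        history.append((sub_q, answer))
    return history[-1][1]
```

With a retriever that returns evidence, each draft answer is grounded before the next hop; with no evidence, the loop degrades to purely parametric generation.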

Zhengliang Shi, Weiwei Sun, Shen Gao, Pengjie Ren, Zhumin Chen, Zhaochun Ren • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-hop Question Answering | HotpotQA (test) | F1 62.37 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | -- | 143 |
| Multi-hop Question Answering | MuSiQue (test) | F1 27.36 | 111 |
| Multi-hop Question Answering | StrategyQA (test) | Accuracy 77.12 | 26 |
| Complex engineering solution design | SolutionBench 1.0 (test) | Environmental Score (AS) 54.8 | 11 |
| Question Answering Correctness | Human Evaluation (120 randomly sampled cases from HotpotQA, 2WikiMultiHopQA, MuSiQue, and DuReader) | Accuracy 52.75 | 4 |
| Question Answering with Unanswerable Questions | MuSiQue Full (test sampled from val) | Accuracy 56 | 3 |
