
SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs

About

Large language models (LLMs) have made significant advancements in various natural language processing tasks, including question answering (QA). While incorporating new information through the retrieval of relevant passages is a promising way to improve QA with LLMs, existing methods often require additional fine-tuning, which becomes infeasible with recent LLMs. Augmenting retrieved passages via prompting has the potential to address this limitation, but this direction remains relatively underexplored. To this end, we design a simple yet effective framework to enhance open-domain QA (ODQA) with LLMs, based on summarized retrieval (SuRe). SuRe helps LLMs predict more accurate answers for a given question, well-supported by summarized retrievals that can be viewed as explicit rationales extracted from the retrieved passages. Specifically, SuRe first constructs a summary of the retrieved passages for each of multiple answer candidates. Then, SuRe confirms the most plausible answer from the candidate set by evaluating the validity and relative ranking of the generated summaries. Experimental results on diverse ODQA benchmarks demonstrate the superiority of SuRe, with improvements of up to 4.6% in exact match (EM) and 4.0% in F1 score over standard prompting approaches. SuRe can also be integrated with a broad range of retrieval methods and LLMs. Finally, the summaries generated by SuRe show additional advantages in measuring the importance of retrieved passages and in serving as rationales preferred by both models and humans.
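The pipeline described above (candidate generation, per-candidate summarization, then selection by validity and ranking) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the `llm(prompt) -> str` completion function and all prompt templates are assumptions.

```python
def sure_answer(question, passages, llm, num_candidates=2):
    """Sketch of the SuRe framework: pick an answer whose conditional
    summary of the retrieved passages is most valid and best ranked."""
    context = "\n".join(passages)

    # Step 1: generate answer candidates from the retrieved passages.
    cand_prompt = (f"Passages:\n{context}\n\nQuestion: {question}\n"
                   f"Give {num_candidates} short candidate answers, one per line.")
    candidates = [c.strip() for c in llm(cand_prompt).splitlines() if c.strip()]

    # Step 2: for each candidate, summarize the passages conditioned on it,
    # i.e. extract a rationale supporting that candidate.
    summaries = {
        cand: llm(f"Passages:\n{context}\n\nSummarize the evidence that the "
                  f"answer to '{question}' is '{cand}'.")
        for cand in candidates
    }

    # Step 3: score each candidate by (a) whether its summary is judged a
    # valid rationale and (b) pairwise ranking against other summaries.
    def score(cand):
        verdict = llm(f"Does this summary support '{cand}' as the answer to "
                      f"'{question}'? Answer True or False.\n{summaries[cand]}")
        validity = 1.0 if "true" in verdict.lower() else 0.0
        wins = sum(
            1.0
            for other in candidates
            if other != cand
            and llm(f"Which summary better answers '{question}'?\n"
                    f"A: {summaries[cand]}\nB: {summaries[other]}\n"
                    f"Reply A or B.").strip().upper().startswith("A")
        )
        return validity + wins

    best = max(candidates, key=score)
    return best, summaries[best]
```

The selected answer comes back together with its summary, so the summary can be surfaced to users as an explicit rationale, as the abstract describes.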

Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, Jinwoo Shin • 2024

Related benchmarks

Task | Dataset | Result | Rank
Multi-hop Question Answering | HotpotQA | F1 Score: 43.4 | 221
Multi-hop Question Answering | 2WikiMQA | F1 Score: 45.8 | 154
Multi-hop Question Answering | HotpotQA | F1: 52.8 | 48
Question Answering | HotpotQA 296:204 (test) | Answerable EM: 51.69 | 20
Question Answering | QASPER 1200:251 (test) | Answerable EM: 13.08 | 20
Multi-hop Question Answering | Musique in-domain (test) | Accuracy (Response): 7.2 | 14
Multi-hop Question Answering | Bamboogle out-of-domain (test) | Accuracy (Response): 17.6 | 14
Multi-hop Question Answering | HotpotQA in-domain (test) | Accuracy (Response): 32.4 | 14
Multi-hop Question Answering | 2WikiMultiHopQA in-domain (test) | Accuracy (Response): 22.2 | 14
Retrieval | 2WikiMQA (test) | -- | 8

(Showing 10 of 12 rows.)
