
RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation

About

Large language models (LLMs) exhibit remarkable generative capabilities but often suffer from hallucinations. Retrieval-augmented generation (RAG) offers an effective solution by incorporating external knowledge, but existing methods still face several limitations: additional deployment costs of separate retrievers, redundant input tokens from retrieved text chunks, and the lack of joint optimization of retrieval and generation. To address these issues, we propose RetroLLM, a unified framework that integrates retrieval and generation into a single, cohesive process, enabling LLMs to directly generate fine-grained evidence from the corpus with constrained decoding. Moreover, to mitigate false pruning in the process of constrained evidence generation, we introduce (1) hierarchical FM-Index constraints, which generate corpus-constrained clues to identify a subset of relevant documents before evidence generation, reducing irrelevant decoding space; and (2) a forward-looking constrained decoding strategy, which considers the relevance of future sequences to improve evidence accuracy. Extensive experiments on five open-domain QA datasets demonstrate RetroLLM's superior performance across both in-domain and out-of-domain tasks. The code is available at https://github.com/sunnynexus/RetroLLM.
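The core constrained-decoding idea can be sketched with a toy prefix trie standing in for the paper's FM-Index: at each step, the decoder is only allowed to emit tokens that keep the output inside the corpus. This is a minimal illustrative sketch, not RetroLLM's actual implementation; the function names and the `score` stub (standing in for LLM logits) are hypothetical, and the forward-looking lookahead strategy is omitted for brevity.

```python
def build_trie(sequences):
    """Index corpus token sequences as a nested-dict trie, so that a
    generated prefix can be checked for valid corpus continuations."""
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next(trie, prefix):
    """Return the set of tokens that extend `prefix` along some corpus
    sequence; an empty set means the prefix has left the corpus."""
    node = trie
    for tok in prefix:
        if tok not in node:
            return set()
        node = node[tok]
    return set(node)

def constrained_greedy_decode(trie, score, max_len=10):
    """Greedy decoding where each step is masked to corpus-valid tokens.
    `score(prefix, tok)` is a stand-in for the model's next-token logit."""
    out = []
    for _ in range(max_len):
        candidates = allowed_next(trie, out)
        if not candidates:
            break  # no corpus continuation: stop generating evidence
        out.append(max(candidates, key=lambda t: score(out, t)))
    return out

# Toy usage: two "documents", and a scorer that prefers the token "dog".
corpus = [["the", "cat", "sat"], ["the", "dog", "ran"]]
trie = build_trie(corpus)
evidence = constrained_greedy_decode(
    trie, score=lambda prefix, tok: 1.0 if tok == "dog" else 0.5
)
print(evidence)  # ['the', 'dog', 'ran']
```

Greedy masking like this is exactly where false pruning can occur (a locally high-scoring token may lead into a dead end), which is what the paper's forward-looking strategy, scoring future continuations before committing to a token, is designed to mitigate.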

Xiaoxi Li, Jiajie Jin, Yujia Zhou, Yongkang Wu, Zhonghua Li, Qi Ye, Zhicheng Dou · 2024

Related benchmarks

Task                            Dataset    Result          Rank
Open-domain Question Answering  NQ         --              20
Open-domain Question Answering  HotpotQA   Accuracy: 61.9  11
Open-domain Question Answering  PopQA      Accuracy: 65.7  11
Open-domain Question Answering  2Wiki      Accuracy: 48.9  11
Open-domain Question Answering  TriviaQA   Accuracy: 74.3  11

Other info

Code: https://github.com/sunnynexus/RetroLLM