
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

About

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome these issues, but have so far been investigated only for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations: one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open-domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
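The retrieve-then-generate pipeline the abstract describes can be illustrated with a minimal toy sketch. This is not the authors' implementation: `embed`, `retrieve`, and `generate` are hypothetical stand-ins (a bag-of-words "encoder" in place of the pre-trained dense retriever, and a string template in place of the seq2seq generator). The real system scores passages by inner product in a dense vector index of Wikipedia, which the toy `retrieve` mimics.

```python
import numpy as np

def embed(text, vocab):
    # Stand-in for a pre-trained dense encoder: a normalized
    # bag-of-words vector over a fixed vocabulary.
    v = np.array([text.lower().split().count(w) for w in vocab], float)
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, passages, vocab, k=2):
    # Dense retrieval: score each passage by the inner product of its
    # embedding with the query embedding, and return the top k.
    q = embed(query, vocab)
    scores = [float(q @ embed(p, vocab)) for p in passages]
    top = sorted(range(len(passages)), key=lambda i: -scores[i])[:k]
    return [(passages[i], scores[i]) for i in top]

def generate(query, retrieved):
    # Stand-in for the seq2seq generator: conditions the "answer" on
    # the concatenated top-k retrieved passages.
    context = " ".join(p for p, _ in retrieved)
    return f"answer({query} | {context})"

passages = [
    "The Eiffel Tower is in Paris",
    "Photosynthesis occurs in chloroplasts",
    "Paris is the capital of France",
]
vocab = sorted({w for p in passages for w in p.lower().split()})
query = "Where is the Eiffel Tower"
top = retrieve(query, passages, vocab, k=2)
output = generate(query, top)
print([p for p, _ in top])
```

The two formulations the paper compares differ in where this retrieval step sits: RAG-Sequence fixes the retrieved passages for the whole output, while RAG-Token may marginalize over different passages at each generated token.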

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela • 2020

Related benchmarks

| Task                                | Dataset         | Metric   | Result | Rank |
|-------------------------------------|-----------------|----------|--------|------|
| Mathematical Reasoning              | GSM8K (test)    | Accuracy | 95.1   | 797  |
| Multi-hop Question Answering        | 2WikiMultihopQA | EM       | 57.9   | 278  |
| Medical Question Answering          | MedMCQA         | Accuracy | 61.49  | 253  |
| Multi-hop Question Answering        | HotpotQA        | F1 Score | 64.5   | 221  |
| Long-context Language Understanding | LongBench       | M-Avg    | 25.04  | 219  |
| Question Answering                  | TriviaQA        | Accuracy | 77.4   | 210  |
| Multi-hop Question Answering        | HotpotQA (test) | F1       | 65.1   | 198  |
| Question Answering                  | PopQA           | Accuracy | 36     | 186  |
| Multi-hop Question Answering        | 2WikiMQA        | F1 Score | 52.11  | 154  |
| Question Answering                  | PubMedQA        | Accuracy | 79.6   | 145  |
Showing 10 of 403 rows.
