
Corrective Retrieval Augmented Generation

About

Large language models (LLMs) inevitably exhibit hallucinations, since the accuracy of generated text cannot be secured solely by the parametric knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a practical complement to LLMs, it relies heavily on the relevance of retrieved documents, raising concerns about how the model behaves when retrieval goes wrong. To this end, we propose Corrective Retrieval Augmented Generation (CRAG) to improve the robustness of generation. Specifically, a lightweight retrieval evaluator is designed to assess the overall quality of the documents retrieved for a query, returning a confidence score that triggers different knowledge retrieval actions. Since retrieval from a static, limited corpus can only return sub-optimal documents, large-scale web search is utilized as an extension to augment the retrieval results. In addition, a decompose-then-recompose algorithm is designed to selectively focus on key information in retrieved documents and filter out irrelevant content. CRAG is plug-and-play and can be seamlessly coupled with various RAG-based approaches. Experiments on four datasets covering short- and long-form generation tasks show that CRAG can significantly improve the performance of RAG-based approaches.

Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, Zhen-Hua Ling • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Question Answering | ARC-C | Accuracy 68.6 | 166 |
| Question Answering | TriviaQA (test) | Accuracy 59.6 | 121 |
| Question Answering | NQ (test) | -- | 66 |
| Question Answering | PopQA (test) | Accuracy 54.9 | 39 |
| Multi-hop Question Answering | 2WikiMultiHopQA N=200 | Judge EM 64 | 24 |
| Multi-hop Question Answering | HotpotQA N=1,000 (test) | F1 Score 0.44 | 23 |
| Multi-hop Question Answering | HotpotQA N=1,000 | Judge EM 58 | 16 |
| Question Answering | RGB (test) | Accuracy 92 | 11 |
