
Recitation-Augmented Language Models

About

We propose a new paradigm to help Large Language Models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus, called RECITation-augmented gEneration (RECITE). Unlike retrieval-augmented language models, which retrieve relevant documents before generating their outputs, RECITE first recites one or several relevant passages from the LLM's own memory via sampling, given an input, and then produces the final answer. We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks. Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance on various closed-book question answering (CBQA) tasks. In experiments, we verify the effectiveness of RECITE on four pre-trained models (PaLM, UL2, OPT, and Codex) and three CBQA tasks (Natural Questions, TriviaQA, and HotpotQA). Our code is available at https://github.com/Edward-Sun/RECITE.
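The recite-and-answer scheme described above can be sketched as a two-step prompting loop with majority voting over sampled paths. This is a minimal illustration, not the authors' implementation: `sample_from_lm` is a hypothetical stand-in for sampling from a real LLM (the paper uses PaLM, UL2, OPT, and Codex), stubbed here so the control flow is runnable.

```python
from collections import Counter

def sample_from_lm(prompt, temperature=0.7):
    """Hypothetical stub for an LLM sampling call; a real implementation
    would query a model such as PaLM here."""
    if "Recite a passage" in prompt:
        return "Mount Everest, at 8,849 m, is Earth's highest mountain."
    return "Mount Everest"

def recite_and_answer(question, num_paths=5):
    """Sample several recitation->answer paths, then majority-vote the answer."""
    answers = []
    for _ in range(num_paths):
        # Step 1: recite a relevant passage from the model's own memory.
        recitation = sample_from_lm(
            f"Recite a passage relevant to the question: {question}"
        )
        # Step 2: answer conditioned on the recited passage.
        answer = sample_from_lm(
            f"Passage: {recitation}\nQuestion: {question}\nAnswer:"
        )
        answers.append(answer.strip())
    # Aggregate across sampled paths: pick the most common answer.
    return Counter(answers).most_common(1)[0][0]

print(recite_and_answer("What is the highest mountain on Earth?"))
```

Sampling multiple recitations and voting hedges against any single recitation being wrong, which is why the intermediate recitation step helps on knowledge-intensive tasks.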

Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-hop Question Answering | 2WikiMultihopQA | -- | 387 |
| Multi-hop Question Answering | HotpotQA (test) | -- | 255 |
| Multi-hop Question Answering | MuSiQue | -- | 185 |
| Question Answering | HotpotQA | EM 37.1 | 109 |
| Citation-augmented Question Answering | bar-GT, PK 1.0 (test) | Accuracy 61.49 | 42 |
| Question Answering | HotpotQA (test) | Ans EM 37.1 | 37 |
| Long-form Question Answering | ELI5 | -- | 32 |
| Truthfulness Evaluation | TruthfulQA (test) | MC1 49.79 | 30 |
| Citation-augmented Question Answering | GT, PK 1.0 (test) | Accuracy 61.38 | 21 |
| Honesty Evaluation | FActScore v1.0 | Score 46.3 | 20 |
Showing 10 of 14 rows
