
Metacognitive Retrieval-Augmented Large Language Models

About

Retrieval-augmented generation has become central in natural language processing due to its efficacy in generating factual content. While traditional methods employ single-time retrieval, more recent approaches have shifted towards multi-time retrieval for multi-hop reasoning tasks. However, these strategies are bound by predefined reasoning steps, potentially leading to inaccuracies in response generation. This paper introduces MetaRAG, an approach that combines the retrieval-augmented generation process with metacognition. Drawing from cognitive psychology, metacognition allows an entity to self-reflect and critically evaluate its cognitive processes. By integrating this capability, MetaRAG enables the model to monitor, evaluate, and plan its response strategies, enhancing its introspective reasoning abilities. Through a three-step metacognitive regulation pipeline, the model can identify inadequacies in initial cognitive responses and fix them. Empirical evaluations show that MetaRAG significantly outperforms existing methods.
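The three-step regulation pipeline (monitoring, evaluating, planning) can be illustrated with a minimal sketch. The Python below is a hypothetical outline under assumed interfaces, not the paper's implementation: retrieve, generate, monitor, evaluate, and plan are placeholder functions standing in for the actual retriever, LLM calls, and metacognitive prompts.

```python
"""Minimal sketch of a metacognitive RAG loop (monitor -> evaluate -> plan).

All functions here are placeholders; the paper's actual prompts,
retriever, and language model are not reproduced.
"""

from typing import List


def retrieve(query: str, k: int = 5) -> List[str]:
    # Placeholder: a real system would query a dense or sparse index.
    return [f"passage {i} about: {query}" for i in range(k)]


def generate(question: str, passages: List[str]) -> str:
    # Placeholder: a real system would prompt an LLM with the passages.
    return f"draft answer to '{question}' using {len(passages)} passages"


def monitor(question: str, answer: str) -> bool:
    # Monitoring: judge whether the current response seems adequate.
    # A real system would ask the LLM for a self-assessment.
    return len(answer) > 0 and "unsure" not in answer


def evaluate(question: str, answer: str, passages: List[str]) -> List[str]:
    # Evaluating: diagnose why the response is inadequate, e.g. a missing
    # reasoning hop or an unsupported claim. Returns the identified gaps.
    return [] if len(passages) >= 3 else ["need evidence for the next hop"]


def plan(question: str, gaps: List[str]) -> str:
    # Planning: turn the diagnosed gaps into a revised retrieval query.
    return question + " " + "; ".join(gaps)


def metarag(question: str, max_rounds: int = 3) -> str:
    query = question
    answer = ""
    for _ in range(max_rounds):
        passages = retrieve(query)
        answer = generate(question, passages)
        if monitor(question, answer):                # step 1: monitoring
            return answer                            # judged adequate; stop
        gaps = evaluate(question, answer, passages)  # step 2: evaluating
        query = plan(question, gaps)                 # step 3: planning
    return answer


if __name__ == "__main__":
    print(metarag("Who directed the film adapted from the 1996 novel?"))
```

The loop stops as soon as monitoring judges the response adequate; otherwise evaluation diagnoses the inadequacy and planning revises the retrieval strategy for another round, mirroring the self-reflective regulation the abstract describes.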

Yujia Zhou, Zheng Liu, Jiajie Jin, Jian-Yun Nie, Zhicheng Dou • 2024

Related benchmarks

Task                            Dataset             Result     Rank
Multi-hop Question Answering    2WikiMQA            F1 58.7    154
Multi-hop Question Answering    HotpotQA            F1 74.6    79
Multi-hop Question Answering    WebQ 2013 (test)    F1 48.2    8
Single-hop Question Answering   NQ 2019 (test)      F1 61.1    8
