
Know More, Know Clearer: A Meta-Cognitive Framework for Knowledge Augmentation in Large Language Models

About

Knowledge augmentation has significantly enhanced the performance of Large Language Models (LLMs) in knowledge-intensive tasks. However, existing methods typically operate on the simplistic premise that model performance equates with internal knowledge, overlooking the knowledge-confidence gaps that lead to overconfident errors or uncertain truths. To bridge this gap, we propose a novel meta-cognitive framework for reliable knowledge augmentation via differentiated intervention and alignment. Our approach leverages internal cognitive signals to partition the knowledge space into mastered, confused, and missing regions, guiding targeted knowledge expansion. Furthermore, we introduce a cognitive consistency mechanism to synchronize subjective certainty with objective accuracy, ensuring calibrated knowledge boundaries. Extensive experiments demonstrate that our framework consistently outperforms strong baselines, validating its effectiveness not only in enhancing knowledge capabilities but also in fostering cognitive behaviors that better distinguish knowns from unknowns.
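The partition described above can be illustrated with a minimal sketch. The region names (mastered, confused, missing) come from the abstract; the specific rule below, which crosses a correctness signal with a confidence threshold, is an illustrative assumption, not the paper's actual method.

```python
# Hedged sketch: partition QA items into mastered / confused / missing
# regions from two per-item signals: whether the model answered correctly
# and how confident it was. The 0.7 threshold is an arbitrary assumption.

def partition(items, conf_threshold=0.7):
    """items: list of (question, correct: bool, confidence: float in [0, 1])."""
    regions = {"mastered": [], "confused": [], "missing": []}
    for question, correct, conf in items:
        confident = conf >= conf_threshold
        if correct and confident:
            regions["mastered"].append(question)   # knows it, and knows it knows
        elif correct != confident:
            regions["confused"].append(question)   # overconfident error or uncertain truth
        else:
            regions["missing"].append(question)    # wrong and unconfident
    return regions

demo = [
    ("capital of France", True, 0.95),   # mastered
    ("obscure fact A",    False, 0.90),  # overconfident error -> confused
    ("obscure fact B",    True, 0.30),   # uncertain truth -> confused
    ("obscure fact C",    False, 0.10),  # missing
]
print(partition(demo))
```

The "confused" region is exactly the knowledge-confidence gap the abstract targets: items where subjective certainty and objective accuracy disagree.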

Hao Chen, Ye He, Yuchun Fan, Yukun Yan, Zhenghao Liu, Qingfu Zhu, Maosong Sun, Wanxiang Che • 2026

Related benchmarks

Task                   | Dataset              | Result         | Rank
Question Answering     | NQ                   | Accuracy 45.7  | 108
Question Answering     | 2Wiki                | --             | 75
Question Answering     | WebQuestions (WebQs) | Accuracy 52.51 | 67
Calibration            | NQ                   | ECE 0.2401     | 55
Question Answering     | PopQA                | Score 43.93    | 50
Calibration            | SQuAD                | ECE 30.29      | 31
Calibration            | WebQ                 | ECE 14.85      | 31
Mathematical Reasoning | GSM8K                | Accuracy 61.18 | 29
Question Answering     | TriQA                | Accuracy 83.43 | 21
Question Answering     | PopQA                | Accuracy 35.2  | 16

(10 of 27 rows shown)
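Several rows above report ECE (Expected Calibration Error), the standard metric for how well a model's stated confidence matches its actual accuracy. A minimal sketch of the usual computation, assuming the common convention of equal-width confidence bins (the paper's exact binning is not stated here):

```python
# Hedged sketch of Expected Calibration Error (ECE): bin predictions by
# confidence, then average |accuracy - mean confidence| over the bins,
# weighted by bin size. 10 equal-width bins is a common default, assumed here.

def ece(confidences, correct, n_bins=10):
    """confidences: floats in [0, 1]; correct: matching bools. Returns ECE in [0, 1]."""
    n = len(confidences)
    total = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # half-open bins (lo, hi], with 0.0 assigned to the first bin
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        total += (len(idx) / n) * abs(acc - avg_conf)
    return total

# Toy example: two confident correct answers, one unconfident wrong one.
print(round(ece([0.95, 0.95, 0.15], [True, True, False]), 4))  # -> 0.0833
```

Lower is better: a perfectly calibrated model (confidence equal to accuracy in every bin) scores 0. Note the table mixes scales, reporting ECE as a fraction for NQ (0.2401) but as a percentage for SQuAD and WebQ.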
