
Disentangling Memory and Reasoning Ability in Large Language Models

About

Large Language Models (LLMs) have demonstrated strong performance on complex tasks requiring both extensive knowledge and reasoning abilities. However, the existing LLM inference pipeline operates as an opaque process with no explicit separation between knowledge retrieval and reasoning steps, making the model's decision-making process unclear and disorganized. This ambiguity can lead to issues such as hallucination and knowledge forgetting, which significantly undermine the reliability of LLMs in high-stakes domains. In this paper, we propose a new inference paradigm that decomposes the complex inference process into two distinct and clear actions: (1) memory recall, which retrieves relevant knowledge, and (2) reasoning, which performs logical steps based on the recalled knowledge. To facilitate this decomposition, we introduce two special tokens, memory and reason, guiding the model to distinguish between steps that require knowledge retrieval and those that involve reasoning. Our experimental results show that this decomposition not only improves model performance but also enhances the interpretability of the inference process, enabling users to identify sources of error and refine model responses effectively. The code is available at https://github.com/MingyuJ666/Disentangling-Memory-and-Reasoning.
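The decomposition described above yields generation traces whose steps are explicitly labeled by special tokens, so a trace can be split into recall steps and reasoning steps for inspection. The sketch below illustrates this with a minimal parser; the exact token strings (`<memory>`, `<reason>`) and the example trace are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical special-token strings marking the two action types
# described in the abstract (assumed here for illustration).
MEMORY_TOKEN = "<memory>"
REASON_TOKEN = "<reason>"

def parse_trace(output: str) -> list[tuple[str, str]]:
    """Split a generated trace into (action, text) steps:
    'memory' for knowledge-recall steps, 'reason' for logical steps."""
    parts = re.split(r"(<memory>|<reason>)", output)
    steps = []
    current = None
    for part in parts:
        if part == MEMORY_TOKEN:
            current = "memory"
        elif part == REASON_TOKEN:
            current = "reason"
        elif current is not None and part.strip():
            steps.append((current, part.strip()))
    return steps

trace = (
    "<memory>The Eiffel Tower is in Paris, France."
    "<reason>Since Paris is in France, the tower is in Europe."
)
for action, text in parse_trace(trace):
    print(f"[{action}] {text}")
```

Labeling each step this way is what makes error attribution possible: a wrong answer can be traced either to a faulty recall step or to a faulty reasoning step.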

Mingyu Jin, Weidi Luo, Sitao Cheng, Xinyi Wang, Wenyue Hua, Ruixiang Tang, William Yang Wang, Yongfeng Zhang • 2024

Related benchmarks

Task                 Dataset                  Metric    Result  Rank
Question Answering   CommonsenseQA            Accuracy  83.2    143
Question Answering   StrategyQA               Accuracy  78.6    114
Question Answering   TruthfulQA               Accuracy  86.6    82
Memory & Reasoning   StrategyQA multi-round   Accuracy  70.1    6
Memory & Reasoning   ComQA multi-round        Accuracy  71.3    6
Memory & Reasoning   TruthQA multi-round      Accuracy  69.2    6

Other info

Code: https://github.com/MingyuJ666/Disentangling-Memory-and-Reasoning
