
MemCoT: Test-Time Scaling through Memory-Driven Chain-of-Thought

About

Large Language Models (LLMs) still suffer from severe hallucinations and catastrophic forgetting during causal reasoning over massive, fragmented long contexts. Existing memory mechanisms typically treat retrieval as a static, single-step passive matching process, leading to severe semantic dilution and contextual fragmentation. To overcome these fundamental bottlenecks, we propose MemCoT, a test-time memory scaling framework that redefines the reasoning process by transforming long-context reasoning into an iterative, stateful information search. MemCoT introduces a multi-view long-term memory perception module that enables Zoom-In evidence localization and Zoom-Out contextual expansion, allowing the model to first identify where relevant evidence resides and then reconstruct the surrounding causal structure necessary for reasoning. In addition, MemCoT employs a task-conditioned dual short-term memory system composed of semantic state memory and episodic trajectory memory. This short-term memory records historical search decisions and dynamically guides query decomposition and pruning across iterations. Empirical evaluations demonstrate that MemCoT establishes state-of-the-art performance: empowered by MemCoT, several open- and closed-source models achieve SOTA results on the LoCoMo and LongMemEval-S benchmarks.

Haodong Lei, Junming Liu, Yirong Chen, Ding Wang, Hongsong Wang• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-context Question Answering | LoCoMo | F1 (Multi Hop): 45.1 | 109 |
| Question Answering | LoCoMo | Single Hop F1: 66.42 | 38 |
| Long-term memory evaluation | LongMemEval-S | Overall Score: 88 | 16 |
