
FlashMem: Distilling Intrinsic Latent Memory via Computation Reuse

About

The stateless architecture of Large Language Models inherently lacks a mechanism for preserving dynamic context, compelling agents to redundantly reprocess history to maintain long-horizon autonomy. While latent memory offers a solution, current approaches are hindered by architectural segregation, relying on auxiliary encoders that decouple memory from the reasoning backbone. We propose FlashMem, a framework that distills intrinsic memory directly from transient reasoning states via computation reuse. Leveraging the property that internal representations uniquely encode input trajectories, FlashMem identifies the last hidden state as a sufficient statistic for the interaction history. This enables a Shared-KV Consolidator to synthesize memory by attending directly to the backbone's frozen cache, eliminating redundant re-parameterization. Furthermore, a parameter-free Cognitive Monitor leverages attention entropy to adaptively trigger consolidation only when high epistemic uncertainty is detected. Experiments demonstrate that FlashMem matches the performance of heavy baselines while reducing inference latency five-fold, effectively bridging the gap between efficiency and persistent cognition.
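The abstract does not give the exact formulation of the Cognitive Monitor, but the idea of a parameter-free, entropy-based trigger can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the averaging scheme, and the fixed threshold are all assumptions. The sketch computes the Shannon entropy of each attention distribution and fires consolidation only when the mean entropy (a proxy for epistemic uncertainty, i.e., diffuse attention) exceeds a threshold.

```python
import numpy as np

def attention_entropy(attn: np.ndarray) -> float:
    """Mean Shannon entropy of attention distributions.

    attn: array of shape (heads, query_len, key_len), where each
    last-axis row is a probability distribution over keys.
    """
    eps = 1e-12  # guard against log(0)
    row_entropy = -(attn * np.log(attn + eps)).sum(axis=-1)
    return float(row_entropy.mean())

def should_consolidate(attn: np.ndarray, threshold: float) -> bool:
    """Parameter-free trigger (hypothetical): consolidate memory only
    when attention entropy signals high epistemic uncertainty."""
    return attention_entropy(attn) > threshold

# Peaked attention (model is confident) vs. diffuse attention (uncertain).
peaked = np.array([[[0.97, 0.01, 0.01, 0.01]]])
diffuse = np.array([[[0.25, 0.25, 0.25, 0.25]]])
```

Under this sketch, `should_consolidate(diffuse, 1.0)` fires while `should_consolidate(peaked, 1.0)` does not, matching the paper's stated behavior of consolidating only under high uncertainty; the threshold value itself would be tuned or derived in practice.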

Yubo Hou, Zhisheng Chen, Tao Wan, Zengchang Qin • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 70.46 | 983 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 84.38 | 797 |
| Mathematical Reasoning | MATH | Accuracy | 51.56 | 535 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy | 58.36 | 433 |
| Science Reasoning | GPQA (test) | Accuracy | 20.54 | 41 |
| Code Generation | KodCode | Accuracy | 54.27 | 38 |
| Long Document Summarization | BookSum (test) | ROUGE-1 | 13.77 | 37 |
| Document Summarization | BookSum | ROUGE-1 | 14.65 | 22 |
| Mathematical & Scientific Reasoning | GPQA | Accuracy | 17.86 | 19 |
| Code Generation | KodCode (test) | Accuracy | 61.13 | 10 |
Showing 10 of 12 rows.
