
What Deserves Memory: Adaptive Memory Distillation for LLM Agents

About

Memory systems for LLM agents struggle to determine what information deserves retention. Existing approaches rely on predefined heuristics such as importance scores, emotional tags, or factual templates, encoding designer intuition rather than learning from the data itself. Inspired by ideas from cognitive science, we propose NEMORI, an adaptive memory distillation framework that casts the assessment of an experience's future utility as a question of predictability. Specifically, NEMORI comprises two cascading modules: Episodic Memory Integration transforms raw interactions into coherent narratives, and Semantic Knowledge Distillation extracts insights via prediction error. By centering on distillation, the framework remains agnostic to downstream memory management. Extensive experiments confirm that NEMORI achieves strong performance, efficiency, and storage reduction. Our work suggests that observing the intrinsic properties of interaction sequences offers a viable, data-driven alternative to heuristic-based memory design. Code: https://github.com/nemori-ai/nemori.
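The core idea of prediction-error-based distillation can be sketched in a few lines. The snippet below is a toy illustration, not NEMORI's actual implementation: it assumes some predictor assigns each incoming event a probability, and retains only events whose surprisal (negative log-probability) exceeds a threshold. The `predict_prob` function and the threshold value are hypothetical stand-ins.

```python
import math

def surprisal(prob: float) -> float:
    """Negative log-probability: high when the event was hard to predict."""
    return -math.log(max(prob, 1e-12))

def distill(events, predict_prob, threshold=1.0):
    """Keep only events with high prediction error -- a toy stand-in for
    the kind of filtering Semantic Knowledge Distillation performs."""
    return [e for e in events if surprisal(predict_prob(e)) > threshold]

# Hypothetical predictor: routine events get high probability,
# novel ones get low probability.
routine = {"greeting", "small talk"}
predict_prob = lambda e: 0.9 if e in routine else 0.1

events = ["greeting", "moved to Berlin", "small talk", "changed jobs"]
kept = distill(events, predict_prob)
# kept == ["moved to Berlin", "changed jobs"]
```

Predictable, routine exchanges are discarded while surprising ones are retained, so what counts as "memorable" emerges from the data rather than from a hand-tuned importance heuristic.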

Wenquan Ma, Jiayan Nan, Wenlong Wu, Yize Chen • 2025

Related benchmarks

Task | Dataset | Result | Rank
Long-term memory evaluation | Locomo | Overall F1: 52.1 | 119
Long-context Question Answering | Locomo | F1 (Multi Hop): 32.36 | 109
Long-context Memory Retrieval | Locomo | Single-hop: 84.9 | 70
Multi-hop Question Answering | Locomo | F1: 44.2 | 67
Single-hop Question Answering | Locomo | F1: 0.588 | 53
Open-domain Question Answering | Locomo | F1: 0.258 | 53
Long-context Memory Evaluation | LongMemEval | Average Score: 74.6 | 52
Long-context reasoning and retrieval | LoCoMo (test) | Single-Hop F1: 87.04 | 37
Temporal Question Answering | Locomo | F1: 0.5838 | 36
Long-term memory evaluation | LongMemEval S (test) | KU (Knowledge Update): 79.5 | 27
Showing 10 of 35 rows
