
Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory

About

Statefulness is essential for large language model (LLM) agents to perform long-term planning and problem-solving. This makes memory a critical component, yet its management and evolution remain largely underexplored. Existing evaluations mostly focus on static conversational settings, where memory is passively retrieved from dialogue to answer queries, overlooking the dynamic ability to accumulate and reuse experience across evolving task streams. In real-world environments such as interactive problem assistants or embodied agents, LLMs must handle continuous task streams, yet they often fail to learn from accumulated interactions and lose valuable contextual insights. This limitation calls for test-time evolution, where LLMs retrieve, integrate, and update memory continuously during deployment. To bridge this gap, we introduce Evo-Memory, a comprehensive streaming benchmark and framework for evaluating self-evolving memory in LLM agents. Evo-Memory structures datasets into sequential task streams, requiring LLMs to search, adapt, and evolve memory after each interaction. We unify and implement over ten representative memory modules and evaluate them across 10 diverse multi-turn goal-oriented and single-turn reasoning and QA datasets. To better benchmark experience reuse, we provide a baseline method, ExpRAG, for retrieving and utilizing prior experience, and further propose ReMem, an action-think-memory refine pipeline that tightly integrates reasoning, task actions, and memory updates to achieve continual improvement.
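The test-time evolution loop described above (search memory, act on the task, then fold the outcome back into memory) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the class names, the bag-of-words retrieval, and the `solve` callback are all hypothetical stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One stored interaction: the task and its observed outcome."""
    task: str
    outcome: str

@dataclass
class ExperienceMemory:
    """Toy ExpRAG-style store with word-overlap retrieval (illustrative only)."""
    entries: list = field(default_factory=list)

    def retrieve(self, task: str, k: int = 2):
        # Score stored experiences by word overlap with the incoming task.
        words = set(task.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e.task.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def update(self, task: str, outcome: str):
        # Evolve memory after each interaction instead of discarding it.
        self.entries.append(Experience(task, outcome))

def run_stream(memory, task_stream, solve):
    """Process a sequential task stream: retrieve -> act -> update memory."""
    results = []
    for task in task_stream:
        context = memory.retrieve(task)   # search prior experience
        outcome = solve(task, context)    # reason/act with retrieved context
        memory.update(task, outcome)      # write the new experience back
        results.append(outcome)
    return results

mem = ExperienceMemory()
out = run_stream(
    mem,
    ["add 2 and 3", "add 4 and 5"],
    lambda task, ctx: f"solved with {len(ctx)} prior experiences",
)
# The second task can already draw on the first task's stored experience.
```

The key contrast with static retrieval benchmarks is the `update` call inside the loop: memory grows during deployment, so later tasks see strictly more experience than earlier ones.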

Tianxin Wei, Noveen Sachdeva, Benjamin Coleman, Zhankui He, Yuanchen Bei, Xuying Ning, Mengting Ai, Yunzhe Li, Jingrui He, Ed H. Chi, Chi Wang, Shuo Chen, Fernando Pereira, Wang-Cheng Kang, Derek Zhiyuan Cheng • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | MATH (test) | Overall Accuracy: 88.2 | 433 |
| Multi-hop Question Answering | HotpotQA | -- | 221 |
| Science Question Answering | GPQA | pass@1 Accuracy: 70.2 | 85 |
| Financial Question Answering | FinQA (test) | Accuracy: 61.5 | 42 |
| Multi-turn Medical Diagnosis | Med-Inquire DiagnosisArena (915 cases) | Mean Diagnostic Grade: 52 | 36 |
| Mathematical Reasoning | CHAMP standard (test) | Accuracy: 40.7 | 36 |
| Mathematical Problem Solving | AIME | AIME Score: 61.67 | 35 |
| Embodied Decision-Making | AlfWorld | Success Rate: 73.13 | 31 |
| Science Question Answering | GPQA (test) | Accuracy: 65.7 | 24 |
| Humanities Question Answering | HLE | HLE Score: 10.7 | 24 |

Showing 10 of 20 rows.
