
Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning

About

Large Language Models (LLMs) have demonstrated impressive capabilities across a wide range of NLP tasks, but they remain fundamentally stateless, constrained by limited context windows that hinder long-horizon reasoning. Recent efforts to address this limitation often augment LLMs with an external memory bank, yet most existing pipelines are static and heuristic-driven, lacking a learned mechanism for deciding what to store, update, or retrieve. We present Memory-R1, a reinforcement learning (RL) framework that equips LLMs with the ability to actively manage and utilize external memory through two specialized agents: a Memory Manager that learns structured operations, including ADD, UPDATE, DELETE, and NOOP; and an Answer Agent that pre-selects and reasons over relevant entries. Both agents are fine-tuned with outcome-driven RL (PPO and GRPO), enabling adaptive memory management with minimal supervision. With only 152 training QA pairs, Memory-R1 outperforms strong baselines and generalizes across diverse question types, three benchmarks (LoCoMo, MSC, LongMemEval), and multiple model scales (3B-14B).
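To make the Memory Manager's action space concrete, here is a minimal sketch of an external memory bank supporting the four structured operations named in the abstract (ADD, UPDATE, DELETE, NOOP). All class and function names are illustrative assumptions, not taken from the Memory-R1 codebase; in the actual framework the operation and its arguments would be chosen by an RL-fine-tuned LLM rather than called directly.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemoryBank:
    """Hypothetical external memory bank managed via structured operations."""
    entries: dict = field(default_factory=dict)
    _next_id: int = 0

    def apply(self, op: str, entry_id: Optional[int] = None,
              text: Optional[str] = None) -> None:
        """Apply one memory operation of the kind a Memory Manager might emit."""
        if op == "ADD":
            self.entries[self._next_id] = text  # store a new memory entry
            self._next_id += 1
        elif op == "UPDATE":
            self.entries[entry_id] = text       # overwrite an existing entry
        elif op == "DELETE":
            self.entries.pop(entry_id, None)    # drop an outdated entry
        elif op == "NOOP":
            pass                                # leave the memory bank unchanged
        else:
            raise ValueError(f"unknown operation: {op}")

# Example: correcting a stale fact instead of storing a contradictory duplicate.
bank = MemoryBank()
bank.apply("ADD", text="User's dog is named Rex")
bank.apply("UPDATE", entry_id=0, text="User's dog is named Max")
bank.apply("NOOP")
print(bank.entries)
```

The point of learning these operations with outcome-driven RL is visible even in this toy: a heuristic pipeline might ADD the second fact as a new entry, leaving contradictory memories, whereas a trained manager can learn that UPDATE (or DELETE) yields better downstream answers.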

Sikuan Yan, Xiufeng Yang, Zuchao Huang, Ercong Nie, Zifeng Ding, Zonggen Li, Xiaowen Ma, Jinhe Bi, Kristian Kersting, Jeff Z. Pan, Hinrich Schütze, Volker Tresp, Yunpu Ma • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Question Answering | LoCoMo Single-Hop | F1 Score | 47.55 | 19
Question Answering | LoCoMo Temporal | F1 Score | 40.96 | 19
Long-term dialogue memory | LoCoMo (test) | Accuracy | 62.74 | 15
Question Answering | LoCoMo Overall | BLEU-1 | 26.06 | 11
Long-term memory evaluation | LongMemEval single run | F1 Score | 46.7 | 7
