
MemPO: Self-Memory Policy Optimization for Long-Horizon Agents

About

Long-horizon agents face the challenge of a context that grows as they interact with the environment, which degrades performance and stability. Existing methods typically introduce an external memory module and look up relevant information from the stored memory, which prevents the model itself from proactively managing its memory content and aligning it with the agent's overarching task objectives. To address these limitations, we propose the Self-Memory Policy Optimization algorithm (MemPO), which enables the agent (the policy model) to autonomously summarize and manage its memory while interacting with the environment. By improving the credit-assignment mechanism based on memory effectiveness, the policy model can selectively retain crucial information, significantly reducing token consumption while preserving task performance. Extensive experiments and analyses confirm that MemPO achieves absolute F1-score gains of 25.98% over the base model and 7.1% over the previous SOTA baseline, while reducing token usage by 67.58% and 73.12%, respectively. The code is released at https://github.com/TheNewBeeKing/MemPO.
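The interaction loop the abstract describes can be sketched in a few lines: the agent keeps a bounded, self-managed memory rather than an ever-growing transcript, compressing it whenever it exceeds a token budget. This is a minimal illustrative sketch only; the names (`summarize`, `run_episode`, `MAX_MEMORY_TOKENS`) and the word-count token proxy are assumptions, not the released MemPO API, and the real summarization step is performed by the policy model itself.

```python
MAX_MEMORY_TOKENS = 64  # toy budget; the paper reports large token savings

def token_count(text: str) -> int:
    """Crude token proxy: whitespace-split word count."""
    return len(text.split())

def summarize(memory: str) -> str:
    """Stand-in for the policy model's own summarization step.
    Here we simply keep the most recent half of the words; in MemPO the
    policy model decides what to retain, guided by memory-effectiveness
    credit assignment."""
    words = memory.split()
    return " ".join(words[len(words) // 2:])

def run_episode(observations) -> str:
    """Append each observation, then self-compress to stay within budget."""
    memory = ""
    for obs in observations:
        memory = (memory + " " + obs).strip()           # append new evidence
        while token_count(memory) > MAX_MEMORY_TOKENS:  # enforce the budget
            memory = summarize(memory)                   # self-compress
    return memory

trace = run_episode([f"step-{i} observed fact {i}" for i in range(40)])
assert token_count(trace) <= MAX_MEMORY_TOKENS
```

Because compression keeps the most recent content, the latest observations survive while older ones are progressively discarded, mimicking (very crudely) selective retention under a fixed context budget.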

Ruoran Li, Xinghua Zhang, Haiyang Yu, Shitong Duan, Xiang Li, Wenxin Xiang, Chonghua Liao, Xudong Guo, Yongbin Li, Jinli Suo • 2026

Related benchmarks

Task                           Dataset                Metric            Result  Rank
Multi-hop Question Answering   MuSiQue                EM                22.1    185
Single-hop Question Answering  PopQA                  EM                46.7    104
Single-hop Question Answering  TriviaQA               EM                57.1    81
Multi-hop QA                   HotpotQA               EM                42.9    76
Multi-objective search         Local Wiki Search      TT                0.32    42
Single-hop Question Answering  Single-hop QA Average  F1 Score          59.61   35
Multi-objective search         Online Web Search      TT                0.19    24
Multi-hop QA                   2WikiMultihopQA        F1 Score          59.17   23
Multi-objective task           Local Wiki Search      F1 (2-objective)  56.47   7
Multi-hop QA                   Bamboogle              F1 Score          52.9    5
(Showing 10 of 14 rows.)
