
MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent

About

Despite improvements from length extrapolation, efficient attention, and memory modules, handling infinitely long documents with linear complexity and without performance degradation during extrapolation remains the ultimate challenge in long-text processing. We directly optimize for long-text tasks in an end-to-end fashion and introduce a novel agent workflow, MemAgent, which reads text in segments and updates its memory using an overwrite strategy. We extend the DAPO algorithm to facilitate training via independent-context multi-conversation generation. MemAgent demonstrates superb long-context capabilities: trained on 32K-length text with an 8K context window, it extrapolates to a 3.5M-token QA task with less than 5% performance loss, and it achieves over 95% on the 512K RULER test.
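The segment-by-segment read loop with memory overwriting can be sketched as below. This is a toy illustration of the workflow described in the abstract, not the authors' implementation: `update_memory` stands in for the LLM call that rewrites the memory, and all function names, the keyword-matching heuristic, and the size limits are hypothetical.

```python
# Toy sketch of a MemAgent-style read-and-overwrite loop (illustrative only).

def chunk(text: str, size: int):
    """Split the document into fixed-size segments."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def update_memory(memory: str, segment: str, query: str, limit: int = 200) -> str:
    """Stand-in for the LLM: rewrite the memory from (old memory, new segment),
    keeping only query-relevant evidence. Here a naive keyword filter is used."""
    terms = [t.lower() for t in query.split()]
    pieces = (memory + " " + segment).split(". ")
    kept = [p for p in pieces if any(t in p.lower() for t in terms)]
    # Overwrite strategy: the memory is fully rewritten and stays bounded in size,
    # so each step sees only O(1) context and total work is linear in length.
    return ". ".join(kept)[:limit]

def answer(document: str, query: str, seg_size: int = 64) -> str:
    memory = ""
    for segment in chunk(document, seg_size):
        memory = update_memory(memory, segment, query)
    return memory  # the final memory would be handed to the model to answer
```

Because the memory is overwritten rather than appended to, the per-step context never grows with document length, which is what allows extrapolation far beyond the training window.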

Hongli Yu, Tinghong Chen, Jiangtao Feng, Jiangjie Chen, Weinan Dai, Qiying Yu, Ya-Qin Zhang, Wei-Ying Ma, Jingjing Liu, Mingxuan Wang, Hao Zhou · 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-context Question Answering | LongBench (test) | HotpotQA: 63.8 | 59 |
| Long-context Question Answering | 2WikiMultiHopQA (out-of-distribution) | Accuracy: 61.7 | 54 |
| Video Question Answering | LVBench | Accuracy: 22.2 | 50 |
| Accurate Retrieval | Accurate Retrieval (AR) suite | Convo Score: 602.7 | 36 |
| Test-Time Learning | Test-Time Learning (TTL) suite | Bank77 Accuracy: 26 | 36 |
| Document Visual Question Answering | SlideVQA | Accuracy: 0.453 | 30 |
| General Capability | 8 capability benchmarks (aggregate) | Average Capability: 57.77 | 26 |
| Question Answering | HotpotQA (10K context) | Accuracy: 82.3 | 19 |
| Question Answering | NQ (10K context) | Accuracy: 53.4 | 19 |
| Question Answering | Average (10K context) | Accuracy: 67.8 | 19 |

Showing 10 of 62 rows.
