MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent
About
Despite advances in length extrapolation, efficient attention, and memory modules, handling infinitely long documents with linear complexity and without performance degradation during extrapolation remains the ultimate challenge in long-text processing. We optimize directly for long-text tasks in an end-to-end fashion and introduce a novel agent workflow, MemAgent, which reads text in segments and updates a fixed-length memory using an overwrite strategy. We extend the DAPO algorithm to support training via independent-context multi-conversation generation. MemAgent demonstrates strong long-context capability: trained with an 8K context on 32K-token text, it extrapolates to a 3.5M-token QA task with less than 5% performance loss and scores above 95% on the 512K RULER test.
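The segment-by-segment read loop described above can be sketched roughly as follows. This is an illustrative outline, not the released implementation: `call_llm` is a hypothetical stand-in for any chat-completion backend (stubbed here so the sketch runs standalone), and the prompt wording is invented for illustration.

```python
# Rough sketch of a MemAgent-style read-update loop (assumptions: `call_llm`
# is a placeholder for a real LLM call; prompts are illustrative only).

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real deployment would query an LLM here.
    # For the sketch it simply echoes the last line of the prompt.
    return prompt.strip().splitlines()[-1]

def chunk(text: str, size: int) -> list[str]:
    """Split the document into fixed-size segments."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def memagent_answer(document: str, question: str, segment_size: int = 1024) -> str:
    memory = ""  # bounded memory, fully overwritten at every step
    for segment in chunk(document, segment_size):
        # Each step is an independent conversation: the model sees only the
        # current memory plus one new segment, never the full history, so
        # the context stays bounded regardless of document length.
        memory = call_llm(
            f"Question: {question}\nMemory: {memory}\nSegment: {segment}\n"
            "Rewrite the memory, keeping only question-relevant evidence."
        )
    # The final answer is produced from the compressed memory alone.
    return call_llm(f"Question: {question}\nMemory: {memory}\nAnswer:")
```

Because each segment is processed in its own fixed-size context, total compute grows linearly with document length, which is what allows extrapolation far beyond the training context.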
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Long-context Question Answering | LongBench (test) | HotpotQA | 63.8 | 59 |
| Long-context Question Answering | 2WikiMultiHopQA (out-of-distribution) | Accuracy | 61.7 | 54 |
| Video Question Answering | LVBench | Accuracy | 22.2 | 50 |
| Accurate Retrieval | Accurate Retrieval (AR) suite | Convo Score | 602.7 | 36 |
| Test-Time Learning | Test-Time Learning (TTL) suite | Bank77 Accuracy | 26 | 36 |
| Document Visual Question Answering | SlideVQA | Accuracy | 0.453 | 30 |
| General Capability | 8 capability benchmarks (aggregate) | Average Capability | 57.77 | 26 |
| Question Answering | HotpotQA (10K context) | Accuracy | 82.3 | 19 |
| Question Answering | NQ (10K context) | Accuracy | 53.4 | 19 |
| Question Answering | Average (10K context) | Accuracy | 67.8 | 19 |