
MemSifter: Offloading LLM Memory Retrieval via Outcome-Driven Proxy Reasoning

About

As Large Language Models (LLMs) are increasingly applied to long-duration tasks, maintaining effective long-term memory has become a critical challenge. Current methods often face a trade-off between cost and accuracy: simple storage schemes frequently fail to retrieve relevant information, while complex indexing methods (such as memory graphs) require heavy computation and can lose information. Furthermore, relying on the working LLM to process all memories is computationally expensive and slow. To address these limitations, we propose MemSifter, a novel framework that offloads memory retrieval to a small-scale proxy model. Instead of increasing the burden on the primary working LLM, MemSifter uses a smaller model to reason about the task before retrieving the necessary information. This approach requires no heavy computation during indexing and adds minimal overhead at inference. To optimize the proxy model, we introduce a memory-specific Reinforcement Learning (RL) training paradigm with a task-outcome-oriented reward based on the working LLM's actual performance on the task. The reward measures the real contribution of retrieved memories through multiple interactions with the working LLM, and discriminates among retrieval rankings by assigning stepped, decreasing contribution weights. We additionally employ training techniques such as Curriculum Learning and Model Merging to improve performance. We evaluated MemSifter on eight LLM memory benchmarks, including Deep Research tasks. The results show that our method matches or exceeds existing state-of-the-art approaches in both retrieval accuracy and final task completion. MemSifter thus offers an efficient and scalable solution for long-term LLM memory. We have open-sourced the model weights, code, and training data to support further research.
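The abstract describes a task-outcome-oriented reward that scores retrieved memories by their measured effect on the working LLM's task performance and distinguishes retrieval rankings with stepped, decreasing weights. The paper's exact formulation is not given on this page; the sketch below is a minimal illustration under assumptions — the function names, the linear `step` schedule, and the sample contribution scores are all hypothetical, not the authors' implementation.

```python
def memory_contribution(score_with: float, score_without: float) -> float:
    """Contribution of one retrieved memory, estimated by comparing the
    working LLM's task score with the memory included vs. withheld.
    In practice this would be averaged over multiple LLM interactions."""
    return score_with - score_without


def stepped_rank_reward(contributions: list[float], step: float = 0.2) -> float:
    """Combine per-memory contributions with a stepped, decreasing weight
    per rank position (1.0, 0.8, 0.6, ...), so memories placed higher in
    the retrieval ranking must contribute more to earn full credit.
    The weight floors at 0 beyond rank 1/step."""
    reward = 0.0
    for rank, c in enumerate(contributions):
        weight = max(1.0 - step * rank, 0.0)
        reward += weight * c
    return reward


# Example: three retrieved memories whose measured contributions
# (via repeated working-LLM rollouts) are 0.5, 0.3, and 0.1.
# Weighted sum: 0.5*1.0 + 0.3*0.8 + 0.1*0.6 ≈ 0.8
print(stepped_rank_reward([0.5, 0.3, 0.1]))
```

The rank-dependent weighting gives the RL objective a signal not just for *which* memories to retrieve but for *how to order* them, which plain set-level rewards cannot provide.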

Jiejun Tan, Zhicheng Dou, Liancheng Zhang, Yuyang Hu, Yiruo Cheng, Ji-Rong Wen• 2026

Related benchmarks

Task: Long-context Memory Retrieval and Reasoning (metric: F1 Score)

Dataset             F1 Score   Rank
LoCoMo 32K          46.39      20
LongMemEval 128K    47.26      20
LongMemEval 1M      49.58      20
PersonaMem 32K      26.45      20
PersonaMem 128K     23.75      20
HotpotQA 128K       24.95      20
WebWalker 128K      27.44      20
WebDancer 128K      38.21      20
ZH4O 128K           50.91      20
PerM 128K V2        23.57      20

Showing 10 of 16 rows

Other info

GitHub
