
LatentMem: Customizing Latent Memory for Multi-Agent Systems

About

Large language model (LLM)-powered multi-agent systems (MAS) demonstrate remarkable collective intelligence, wherein multi-agent memory serves as a pivotal mechanism for continual adaptation. However, existing multi-agent memory designs remain constrained by two fundamental bottlenecks: (i) memory homogenization arising from the absence of role-aware customization, and (ii) information overload induced by excessively fine-grained memory entries. To address these limitations, we propose LatentMem, a learnable multi-agent memory framework designed to customize agent-specific memories in a token-efficient manner. Specifically, LatentMem comprises an experience bank that stores raw interaction trajectories in a lightweight form, and a memory composer that synthesizes compact latent memories conditioned on retrieved experience and agent-specific contexts. Further, we introduce Latent Memory Policy Optimization (LMPO), which propagates task-level optimization signals through latent memories to the composer, encouraging it to produce compact and high-utility representations. Extensive experiments across diverse benchmarks and mainstream MAS frameworks show that LatentMem achieves a performance gain of up to 19.36% over vanilla settings and consistently outperforms existing memory architectures, without requiring any modifications to the underlying frameworks.
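The two components named in the abstract can be sketched as follows. This is a toy illustration only: all class and method names are hypothetical (the paper's actual implementation is not shown on this page), retrieval is approximated by token overlap, and the "latent memory" is a plain string rather than learned latent tokens.

```python
# Hypothetical sketch of the LatentMem pipeline: an experience bank of raw
# trajectories plus a composer that builds a compact, role-conditioned memory.
from collections import Counter

class ExperienceBank:
    """Stores raw interaction trajectories in a lightweight form."""
    def __init__(self):
        self.trajectories = []

    def add(self, trajectory: str):
        self.trajectories.append(trajectory)

    def retrieve(self, query: str, k: int = 2):
        # Toy relevance score: token overlap with the query, standing in
        # for a real retriever over stored trajectories.
        def overlap(t):
            return len(set(t.lower().split()) & set(query.lower().split()))
        return sorted(self.trajectories, key=overlap, reverse=True)[:k]

class MemoryComposer:
    """Synthesizes a compact memory conditioned on retrieved experience
    and an agent-specific context (here, just a role name)."""
    def compose(self, experiences, agent_role: str):
        # Toy composition: keep the most frequent tokens across retrieved
        # experiences, prefixed with the agent's role. A learned composer
        # would instead emit a short sequence of latent tokens.
        counts = Counter(tok for e in experiences for tok in e.lower().split())
        top = [tok for tok, _ in counts.most_common(5)]
        return f"[{agent_role}] " + " ".join(top)

bank = ExperienceBank()
bank.add("planner decomposed the task into three subgoals")
bank.add("coder fixed an off-by-one bug in the loop")
memory = MemoryComposer().compose(bank.retrieve("fix loop bug"), "coder")
```

LMPO itself is a training procedure (task-level reward propagated back through the latent memories to the composer) and is not captured by this inference-time sketch.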

Muxin Fu, Guibin Zhang, Xiangyuan Xue, Yafu Li, Zefeng He, Siyuan Huang, Xiaoye Qu, Yu Cheng, Yang Yang • 2026

Related benchmarks

Task               | Dataset      | Metric   | Result | Rank
Automated Planning | PDDL         | Accuracy | 28.96  | 233
Question Answering | PopQA        | Accuracy | 50.16  | 186
Question Answering | StrategyQA   | Accuracy | 67.89  | 114
Question Answering | TriviaQA     | Accuracy | 74.92  | 85
Code Generation    | BigCodeBench | Accuracy | 83.84  | 59
Code Generation    | KodCode      | Accuracy | 65.9   | 38

Other info

GitHub
