
SYNAPSE: Empowering LLM Agents with Episodic-Semantic Memory via Spreading Activation

About

While Large Language Models (LLMs) excel at generalized reasoning, standard retrieval-augmented approaches fail to address the disconnected nature of long-term agentic memory. To bridge this gap, we introduce Synapse (Synergistic Associative Processing Semantic Encoding), a unified memory architecture that transcends static vector similarity. Drawing from cognitive science, Synapse models memory as a dynamic graph where relevance emerges from spreading activation rather than pre-computed links. By integrating lateral inhibition and temporal decay, the system dynamically highlights relevant sub-graphs while filtering interference. We implement a Triple Hybrid Retrieval strategy that fuses geometric embeddings with activation-based graph traversal. Comprehensive evaluations on the LoCoMo benchmark show that Synapse significantly outperforms state-of-the-art methods in complex temporal and multi-hop reasoning tasks, offering a robust solution to the "Contextual Tunneling" problem. Our code and data will be made publicly available upon acceptance.
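To make the core mechanism concrete, below is a minimal sketch of spreading activation over a memory graph with decay and lateral inhibition. The graph layout, function name, and parameter values are illustrative assumptions for exposition, not the Synapse implementation (which also fuses geometric embeddings in its Triple Hybrid Retrieval).

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.7, inhibition=0.2, steps=3):
    """Propagate activation from seed memories through a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_weight)
    seeds: dict mapping node -> initial activation (the retrieval cue)
    decay: fraction of activation passed along each hop (attenuation stand-in)
    inhibition: lateral inhibition -- each step subtracts a share of the
                mean activation, so weakly activated nodes are filtered out
    """
    activation = defaultdict(float, seeds)
    for _ in range(steps):
        # accumulate activation flowing across edges this step
        incoming = defaultdict(float)
        for node, act in activation.items():
            for nbr, weight in graph.get(node, []):
                incoming[nbr] += decay * weight * act
        for nbr, inc in incoming.items():
            activation[nbr] += inc
        # lateral inhibition: suppress nodes below the competitive threshold
        mean_act = sum(activation.values()) / len(activation)
        activation = defaultdict(float, {
            n: a - inhibition * mean_act
            for n, a in activation.items()
            if a - inhibition * mean_act > 0
        })
    return dict(activation)

# Toy memory graph (hypothetical episode nodes and association weights).
graph = {
    "paris_trip": [("eiffel_tower", 0.9), ("croissant", 0.4)],
    "eiffel_tower": [("paris_trip", 0.9)],
}
scores = spread_activation(graph, {"paris_trip": 1.0})
```

After a few steps, strongly connected memories (`eiffel_tower`) end up with much higher activation than weakly associated ones (`croissant`), which is the behavior the abstract describes: relevant sub-graphs are highlighted while interference is damped.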

Hanqi Jiang, Junhao Chen, Yi Pan, Ling Chen, Weihang You, Yifan Zhou, Ruidong Zhang, Andrea Sikora, Lin Zhao, Yohannes Abate, Tianming Liu • 2026

Related benchmarks

Task                             Dataset           Result                      Rank
Long-context Question Answering  LoCoMo            Average F1: 40.5            64
Long-context Question Answering  LoCoMo            Single-Hop LLJ Score: 81.5  24
LLM Agent Memory Retrieval       LoCoMo v1 (full)  F1 (Multi-Hop): 35.7        12
