Experience-Evolving Multi-Turn Tool-Use Agent with Hybrid Episodic-Procedural Memory
About
As intents unfold and environments change, multi-turn agents face continuously shifting decision contexts. Although reusing past experience is intuitively appealing, existing approaches remain limited: full trajectories are often too context-specific to transfer, while tool-level reuse ignores the surrounding context and environment. In this paper, we introduce a hybrid episodic-procedural memory strategy (H-EPM) that enables experience-induced self-evolution of multi-turn tool-use policies by adaptively reusing partially overlapping successful experiences during both inference and training. Inspired by human episodic-procedural integration, we construct a tool graph from accumulated trajectories, where recurring tool-to-tool dependencies capture procedural routines and each edge is augmented with compact episodic summaries of relevant context. At inference time, the agent dynamically balances episodic recall for contextual reasoning with procedural execution for routine steps. Beyond inference, H-EPM introduces a memory-guided reinforcement learning paradigm that directly addresses a core challenge in multi-turn agent reinforcement learning, namely ineffective exploration over long trajectories. By biasing exploration toward historically successful tool transitions, H-EPM learns a stronger policy that generalizes at inference time without relying on domain-specific experience collection. Experiments show that H-EPM consistently delivers substantial inference-time gains over strong baselines across multi-turn tool-use benchmarks, reaching improvements of up to fifty percent. It also improves reinforcement learning policy performance, achieving gains of up to forty percent on out-of-distribution tasks.
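The tool-graph construction described above can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes trajectories arrive as lists of `(tool_name, context_summary)` steps from successful runs, and all class and method names (`ToolGraph`, `add_trajectory`, `next_tool`) are hypothetical. Edge counts stand in for procedural routines, and the per-edge summary list stands in for compact episodic memory.

```python
from collections import defaultdict

class ToolGraph:
    """Hypothetical sketch of a hybrid episodic-procedural tool graph:
    nodes are tools, edges count recurring tool-to-tool transitions
    (procedural routines) and carry compact episodic summaries of the
    contexts in which each transition succeeded."""

    def __init__(self, max_summaries=3):
        # Each edge (src_tool, dst_tool) stores a transition count and
        # up to max_summaries episodic context snippets.
        self.edges = defaultdict(lambda: {"count": 0, "summaries": []})
        self.max_summaries = max_summaries

    def add_trajectory(self, trajectory):
        """trajectory: list of (tool_name, context_summary) steps
        from one successful run."""
        for (src, _), (dst, ctx) in zip(trajectory, trajectory[1:]):
            edge = self.edges[(src, dst)]
            edge["count"] += 1
            if len(edge["summaries"]) < self.max_summaries:
                edge["summaries"].append(ctx)

    def next_tool(self, current_tool):
        """Procedural recall: return the most frequent successor of
        current_tool together with its episodic summaries, so the agent
        can weigh routine execution against contextual reasoning."""
        candidates = {dst: e for (src, dst), e in self.edges.items()
                      if src == current_tool}
        if not candidates:
            return None, []
        best = max(candidates, key=lambda d: candidates[d]["count"])
        return best, candidates[best]["summaries"]


# Usage with two made-up successful trajectories:
g = ToolGraph()
g.add_trajectory([("search_flights", "user wants NYC to SF"),
                  ("book_flight", "picked cheapest fare"),
                  ("send_confirmation", "email on file")])
g.add_trajectory([("search_flights", "user wants LA to SEA"),
                  ("book_flight", "aisle seat requested"),
                  ("send_confirmation", "notify by SMS")])
tool, summaries = g.next_tool("search_flights")
# tool is "book_flight"; summaries hold the two booking contexts
```

In memory-guided RL, the same edge counts could serve as an exploration bias, steering rollouts toward historically successful transitions; that part is omitted here.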
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Agent Task Completion | τ-Bench (test) | Average Task Reward | 0.791 | 27 |
| Agent Task Completion | τ²-Bench (test) | Average Task Reward | 0.921 | 27 |
| Agent Task Completion | ToolSandbox (test) | Average Task Reward | 0.704 | 27 |
| Multi-turn agent task | ACEBench multi-turn (test) | Process Accuracy | 76.5 | 15 |
| Multi-turn agent decision making | τ-Bench (test) | Success Rate | 55.8 | 7 |
| Multi-turn agent decision making | τ²-Bench (test) | Success Rate | 22.3 | 7 |
| Multi-turn agent decision making | ToolSandbox (test) | Success Rate | 52.2 | 7 |
| Agent Task Completion | ∞Bench | Average Task Reward | 92.1 | 2 |
| Agent Task Completion | ToolSandbox | Average Task Reward | 0.67 | 2 |