SEARL: Joint Optimization of Policy and Tool Graph Memory for Self-Evolving Agents
About
Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) have demonstrated significant potential on single-turn reasoning tasks. With the paradigm shift toward self-evolving agentic learning, models are increasingly expected to learn from their own trajectories by synthesizing tools or accumulating explicit experiences. However, prevailing methods typically rely on large-scale LLMs or multi-agent frameworks, which hinders deployment in resource-constrained environments. The inherent sparsity of outcome-based rewards poses a further challenge, since agents typically receive feedback only upon task completion. To address these limitations, we introduce SEARL, a tool-memory-based self-evolving agentic framework. Unlike approaches that use interaction experiences directly, our method constructs a structured experience memory that integrates planning with execution. This memory provides a novel state abstraction that facilitates generalization across analogous contexts, such as tool reuse. Consequently, agents extract explicit knowledge from historical data while leveraging inter-trajectory correlations to densify reward signals. We evaluate the framework on knowledge-reasoning and mathematics tasks, demonstrating that it enables more practical and efficient learning.
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Multi-hop QA | 2Wiki | Pass@1 | 36 | 6 |
| Multi-hop QA | Bamboogle | Pass@1 | 30.4 | 6 |
| Multi-hop QA | HotpotQA | Pass@1 | 33.5 | 6 |
| Mathematical Reasoning | MATH 500 | Pass@1 | 68.2 | 6 |
| Mathematical Reasoning | AIME 24 | Pass@1 | 33.33 | 6 |
| Mathematical Reasoning | GSM8K | Pass@1 | 86.2 | 6 |