
SEARL: Joint Optimization of Policy and Tool Graph Memory for Self-Evolving Agents

About

Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) have demonstrated significant potential in single-turn reasoning tasks. With the paradigm shift toward self-evolving agentic learning, models are increasingly expected to learn from trajectories by synthesizing tools or accumulating explicit experiences. However, prevailing methods typically rely on large-scale LLMs or multi-agent frameworks, which hinders their deployment in resource-constrained environments. The inherent sparsity of outcome-based rewards also poses a substantial challenge, as agents typically receive feedback only upon task completion. To address these limitations, we introduce SEARL, a tool-memory-based self-evolving agentic framework. Unlike approaches that directly utilize raw interaction experiences, our method constructs a structured experience memory that integrates planning with execution. This provides a novel state abstraction that facilitates generalization across analogous contexts, such as tool reuse. Consequently, agents extract explicit knowledge from historical data while leveraging inter-trajectory correlations to densify reward signals. We evaluate our framework on knowledge-reasoning and mathematics tasks, demonstrating its effectiveness in achieving more practical and efficient learning.
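The core idea described above can be sketched in a few lines: tools are stored in a memory indexed by an abstracted state, so analogous contexts retrieve the same entries, and the sparse end-of-task reward is propagated back to every tool used along the trajectory. This is a minimal illustrative sketch, not the paper's implementation; the class names, the abstraction function, and the running-average credit rule are all assumptions for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ToolEntry:
    """A stored tool plus running statistics from past use (hypothetical)."""
    name: str
    code: str
    uses: int = 0
    mean_reward: float = 0.0


class ToolGraphMemory:
    """Minimal sketch of a structured tool memory.

    Tools are indexed by an abstracted state signature so that
    analogous contexts retrieve (and hence reuse) the same tools.
    """

    def __init__(self):
        self.index = defaultdict(list)  # signature -> [ToolEntry]

    @staticmethod
    def abstract_state(task_type, subgoal):
        # Hypothetical abstraction: drop surface details of the instance,
        # keep only task type and current subgoal so similar contexts collide.
        return (task_type, subgoal)

    def add(self, task_type, subgoal, entry):
        self.index[self.abstract_state(task_type, subgoal)].append(entry)

    def retrieve(self, task_type, subgoal):
        # Return stored tools for this abstract state, best-performing first.
        entries = self.index.get(self.abstract_state(task_type, subgoal), [])
        return sorted(entries, key=lambda e: e.mean_reward, reverse=True)

    def credit(self, used_entries, outcome_reward):
        # Densify the sparse outcome reward: every tool used in the
        # trajectory receives a running-average credit update, turning
        # one terminal signal into per-step feedback.
        for e in used_entries:
            e.uses += 1
            e.mean_reward += (outcome_reward - e.mean_reward) / e.uses
```

For example, after a successful multi-hop QA trajectory that used a `wiki_lookup` tool, `credit([...], 1.0)` raises that entry's mean reward, and `retrieve("multi_hop_qa", "find_entity")` will rank it first for future analogous states.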

Xinshun Feng, Xinhao Song, Lijun Li, Gongshen Liu, Jing Shao • 2026

Related benchmarks

| Task                   | Dataset   | Metric | Result | Rank |
|------------------------|-----------|--------|--------|------|
| Multi-hop QA           | 2Wiki     | Pass@1 | 36.0   | 6    |
| Multi-hop QA           | Bamboogle | Pass@1 | 30.4   | 6    |
| Mathematical Reasoning | MATH 500  | Pass@1 | 68.2   | 6    |
| Mathematical Reasoning | AIME 24   | Pass@1 | 33.33  | 6    |
| Multi-hop QA           | HotpotQA  | Pass@1 | 33.5   | 6    |
| Mathematical Reasoning | GSM8K     | Pass@1 | 86.2   | 6    |
