
Learning from the Irrecoverable: Error-Localized Policy Optimization for Tool-Integrated LLM Reasoning

About

Tool-integrated reasoning (TIR) enables LLM agents to solve tasks through planning, tool use, and iterative revision, but outcome-only reinforcement learning in this setting suffers from sparse, delayed rewards and weak step-level credit assignment. In long-horizon TIR trajectories, an early irrecoverable mistake can determine success or failure, making it crucial to localize the first irrecoverable step and leverage it for fine-grained credit assignment. We propose Error-Localized Policy Optimization (ELPO), which localizes the first irrecoverable step via binary-search rollout trees under a fixed rollout budget, converts the resulting tree into stable learning signals through hierarchical advantage attribution, and applies error-localized adaptive clipping to strengthen corrective updates on the critical step and its suffix. Across TIR benchmarks in math, science QA, and code execution, ELPO consistently outperforms strong Agentic RL baselines under comparable sampling budgets, with additional gains in Pass@K and Major@K scaling, rollout ranking quality, and tool-call efficiency. Our code will be publicly released soon.
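The core localization step described above — finding the first irrecoverable step with binary search over trajectory prefixes under a small rollout budget — can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' released implementation: `rollout_success`, `k_rollouts`, and `threshold` are hypothetical names, and the sketch assumes success-from-prefix is (approximately) monotone non-increasing along the trajectory, which is what makes binary search valid.

```python
def localize_first_irrecoverable_step(trajectory, rollout_success,
                                      k_rollouts=4, threshold=0.5):
    """Binary-search for the earliest step after which rollouts mostly fail.

    `rollout_success(prefix_len)` is a hypothetical oracle: it samples one
    continuation of the first `prefix_len` steps with the current policy and
    returns True if the task succeeds. Total budget is O(k_rollouts * log T)
    rollouts rather than O(k_rollouts * T) for a linear scan.
    """
    def recoverable(prefix_len):
        # Empirical success rate from this prefix under a small rollout budget.
        wins = sum(rollout_success(prefix_len) for _ in range(k_rollouts))
        return wins / k_rollouts >= threshold

    lo, hi = 0, len(trajectory)  # invariant: prefix `lo` recoverable, `hi` not
    if recoverable(hi):
        return None  # full trajectory succeeds: no irrecoverable step
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if recoverable(mid):
            lo = mid
        else:
            hi = mid
    return hi - 1  # 0-based index of the first irrecoverable step
```

The returned index marks the boundary step; in ELPO's terms, the hierarchical advantages and adaptive clipping would then concentrate corrective updates on this step and its suffix.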

Qiao Liang, Yuke Zhu, Chao Ge, Lei Yang, Ying Shen, Bo Zheng, Sheng Guo • 2026

Related benchmarks

Task                           Dataset         Metric          Result   Rank
Mathematical Reasoning         MATH            Accuracy        92.8     535
Mathematical Reasoning         AIME 25         Accuracy        69.4     201
Scientific Question Answering  GPQA Diamond    Accuracy        59.1     64
Coding                         LiveCodeBench   Task Accuracy   28.6     23
Mathematical Reasoning         AIME 24         Accuracy        74.3     17
