
ToolRL: Reward is All Tool Learning Needs

About

Current Large Language Models (LLMs) often undergo supervised fine-tuning (SFT) to acquire tool-use capabilities. However, SFT struggles to generalize to unfamiliar or complex tool-use scenarios. Recent advancements in reinforcement learning (RL), particularly with R1-like models, have demonstrated promising reasoning and generalization abilities. Yet reward design for tool use presents unique challenges: multiple tools may be invoked with diverse parameters, and coarse-grained reward signals, such as answer matching, fail to offer the fine-grained feedback required for effective learning. In this work, we present the first comprehensive study on reward design for tool selection and application tasks within the RL paradigm. We systematically explore a wide range of reward strategies, analyzing their types, scales, granularity, and temporal dynamics. Building on these insights, we propose a principled reward design tailored for tool-use tasks and apply it to train LLMs using Group Relative Policy Optimization (GRPO). Empirical evaluations across diverse benchmarks demonstrate that our approach yields robust, scalable, and stable training, achieving a 17% improvement over base models and a 15% gain over SFT models. These results highlight the critical role of thoughtful reward design in enhancing the tool-use capabilities and generalization performance of LLMs. All code is released to facilitate future research.
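The abstract's central point is that answer-level matching is too coarse for tool learning: a rollout that picks the right tool but one wrong parameter should score higher than one that picks the wrong tool entirely. A minimal sketch of such a decomposed reward is below; the function name, weights, and call representation are illustrative assumptions, not the paper's exact formulation.

```python
def tool_reward(pred_call: dict, gold_call: dict) -> float:
    """Toy fine-grained reward for one predicted tool call.

    Decomposes correctness into (a) tool selection and (b) per-parameter
    value matches, instead of a single exact-match signal. The weights
    here (1 point each, reward in [0, 2]) are illustrative only.
    """
    reward = 0.0
    # Tool selection: did the model invoke the correct tool?
    if pred_call.get("name") == gold_call.get("name"):
        reward += 1.0
        pred_args = pred_call.get("arguments", {})
        gold_args = gold_call.get("arguments", {})
        if gold_args:
            # Parameter-level partial credit: fraction of gold
            # parameters reproduced with the correct value.
            matched = sum(
                1 for k, v in gold_args.items() if pred_args.get(k) == v
            )
            reward += matched / len(gold_args)
        else:
            reward += 1.0  # tool takes no parameters
    return reward
```

A GRPO-style trainer would compute this reward for each sampled rollout in a group and normalize within the group to form advantages; the point of the decomposition is that partially correct calls receive a graded, rather than zero, learning signal.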

Cheng Qian, Emre Can Acikgoz, Qi He, Hongru Wang, Xiusi Chen, Dilek Hakkani-Tür, Gokhan Tur, Heng Ji • 2025

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Function Calling | BFCL V3 | Overall Accuracy | 68.5 | 88 |
| Function Calling | BFCL (Berkeley Function Calling Leaderboard) | Base Score | 0.5 | 28 |
| General Capability | 8 capability benchmarks (aggregate) | Average Capability | 54.16 | 26 |
| Multi-hop tool use | ToolHop | Answer Correctness | 42.55 | 16 |
| Tool-use interaction evaluation | FTRL | Solve P | 30.9 | 16 |
| Coding | LiveCodeBench | Accuracy | 26.76 | 16 |
| Tool Use | Tool Use Live | Para Score | 56.25 | 15 |
| Tool Use | Tool Use Non-Live | Para | 0.91 | 15 |
| Coding | LiveBench | Accuracy | 31.84 | 15 |
| Memory | RULER HotpotQA | Score (7K) | 58.59 | 15 |

Showing 10 of 13 rows.
