
TreeRL: LLM Reinforcement Learning with On-Policy Tree Search

About

Reinforcement learning (RL) with tree search has demonstrated superior performance in traditional reasoning tasks. Compared to conventional independent chain sampling with outcome supervision, tree search enables better exploration of the reasoning space and provides dense, on-policy process rewards during RL training, yet it remains under-explored in on-policy LLM RL. We propose TreeRL, a reinforcement learning framework that directly incorporates on-policy tree search into RL training. Our approach includes intermediate supervision and eliminates the need for separate reward model training; existing approaches typically train a separate process reward model, which can suffer from distribution mismatch and reward hacking. We also introduce a cost-effective tree search strategy that achieves higher search efficiency under the same generation token budget by branching from high-uncertainty intermediate steps rather than branching at random. Experiments on challenging math and code reasoning benchmarks demonstrate that TreeRL achieves superior performance compared to traditional ChainRL, highlighting the potential of tree search for LLM RL. TreeRL is open-sourced at https://github.com/THUDM/TreeRL.
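The branching heuristic described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each reasoning step comes with per-token probability distributions, scores each step by its mean token entropy, and picks the most uncertain intermediate step as the branch point. The function names and toy data are hypothetical.

```python
import math

def step_entropy(step_token_probs):
    """Mean per-token entropy of one reasoning step.

    step_token_probs: list of per-token probability distributions,
    each a list of probabilities over a (toy-sized) vocabulary.
    """
    total = 0.0
    for dist in step_token_probs:
        total += -sum(p * math.log(p) for p in dist if p > 0)
    return total / len(step_token_probs)

def pick_branch_step(chain):
    """Index of the highest-uncertainty intermediate step in a chain.

    chain: list of steps, each a list of per-token distributions.
    The final step is excluded, since branching after the answer
    is produced cannot improve exploration.
    """
    entropies = [step_entropy(step) for step in chain[:-1]]
    return max(range(len(entropies)), key=entropies.__getitem__)

# Toy chain of three steps: the middle step is the most uncertain,
# so a budget-limited tree search would spawn new branches there.
chain = [
    [[0.9, 0.05, 0.05]],            # confident step, low entropy
    [[0.4, 0.3, 0.3], [0.5, 0.5]],  # high-entropy step -> branch here
    [[0.95, 0.05]],                 # final step (excluded from choice)
]
print(pick_branch_step(chain))  # -> 1
```

Under a fixed token budget, spending branches at high-entropy steps concentrates exploration where the model is genuinely undecided, rather than duplicating near-deterministic prefixes.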

Zhenyu Hou, Ziniu Hu, Yujiang Li, Rui Lu, Jie Tang, Yuxiao Dong • 2025

Related benchmarks

Task                           Dataset                                                      Result         Rank
Mathematical Reasoning         GSM8K (test)                                                 Accuracy 65.5  797
Multi-hop Question Answering   2WikiMQA                                                     F1 69.5        154
Multi-hop Question Answering   MuSiQue                                                      --             106
Single-hop Question Answering  TriviaQA                                                     --             62
Single-hop Question Answering  PopQA                                                        --             55
Multi-hop Question Answering   HotpotQA                                                     F1 61.1        31
Multi-hop Question Answering   Bamboogle                                                    F1 57.7        25
Question Answering             Knowledge-Intensive Question Answering Benchmarks Aggregate  F1 57.6        15
