
Revisiting Tree Search for LLMs: Gumbel and Sequential Halving for Budget-Scalable Reasoning

About

Neural tree search is a powerful decision-making algorithm widely used in complex domains such as game playing and model-based reinforcement learning. Recent work has applied AlphaZero-style tree search to enhance the reasoning capabilities of Large Language Models (LLMs) during inference, but we find that this approach suffers from a scaling failure: on GSM8K and Game24, accuracy drops as the search budget increases. In this paper, we present ReSCALE, an adaptation of Gumbel AlphaZero MCTS that replaces Dirichlet noise and PUCT selection with Gumbel sampling and Sequential Halving, restoring monotonic scaling without changes to the model or its training. ReSCALE reaches 58.4% on GSM8K and 85.3% on Game24 at budgets where the baseline degrades. Ablations confirm that Sequential Halving is the primary driver of the improvement.
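The root-selection scheme the abstract describes can be sketched as follows: sample a Gumbel for each root action, keep the top-m candidates by Gumbel-perturbed logit, then spend the simulation budget with Sequential Halving, repeatedly discarding the worse half of the candidates. This is a minimal illustrative sketch, not the paper's implementation; the function name, the `simulate` callback, and the scoring details are assumptions.

```python
import math
import random

def gumbel_sequential_halving(logits, budget, simulate, m=4, rng=None):
    """Illustrative sketch: Gumbel-Top-m candidate sampling at the root,
    then Sequential Halving over the simulation budget.

    logits   -- policy logits over root actions
    budget   -- total number of simulations to spend
    simulate -- callback returning a value estimate for an action
    All names and the exact scoring rule are assumptions for illustration.
    """
    rng = rng or random.Random(0)
    # Sample one Gumbel per action; keep the top-m by g + logit.
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    candidates = sorted(range(len(logits)),
                        key=lambda a: gumbels[a] + logits[a],
                        reverse=True)[:m]
    totals = {a: 0.0 for a in candidates}
    counts = {a: 0 for a in candidates}
    phases = max(1, math.ceil(math.log2(m)))
    while len(candidates) > 1:
        # Spread an equal slice of the budget over the surviving actions.
        per_action = max(1, budget // (phases * len(candidates)))
        for a in candidates:
            for _ in range(per_action):
                totals[a] += simulate(a)
                counts[a] += 1
        # Halve: keep the better half by Gumbel-perturbed logit + value.
        candidates = sorted(candidates,
                            key=lambda a: gumbels[a] + logits[a]
                                          + totals[a] / counts[a],
                            reverse=True)[:max(1, len(candidates) // 2)]
    return candidates[0]
```

Because every candidate's visit count is fixed in advance by the halving schedule, increasing `budget` only refines the value estimates used for elimination, which is the mechanism the abstract credits for restoring monotonic scaling.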

Leonid Ugadiarov, Yuri Kuratov, Aleksandr Panov, Alexey Skrynnik • 2026

Related benchmarks

Task                   | Dataset    | Result           | Rank
Arithmetic Reasoning   | Game of 24 | Performance 85.3 | 11
Mathematical Reasoning | GSM8K      | Accuracy 58.4    | 7
