
The Optimal Token Baseline: Variance Reduction for Long-Horizon LLM-RL

About

Reinforcement Learning (RL) for Large Language Models (LLMs) often suffers from training collapse in long-horizon tasks due to exploding gradient variance. To mitigate this, a baseline is commonly introduced for advantage computation; however, traditional value models remain difficult to optimize, and standard group-based baselines overlook sequence heterogeneity. Although classic optimal baseline theory can achieve global variance reduction, it neglects token heterogeneity and requires prohibitive gradient-based computation. In this work, we derive the Optimal Token Baseline (OTB) from first principles, proving that gradient updates should be weighted inversely to their cumulative gradient norm. To ensure efficiency, we propose the Logit-Gradient Proxy that approximates the gradient norm using only forward-pass probabilities. Our method achieves training stability and matches the performance of large group sizes ($N=32$) with only $N=4$, reducing token consumption by over 65% across single-turn and tool-integrated reasoning tasks.
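The abstract's core idea can be illustrated with a minimal sketch. The helper names (`logit_grad_norm_proxy`, `inverse_norm_weights`) and the exact weighting formula below are assumptions for illustration only, not the paper's actual algorithm; the sketch merely shows how a per-token gradient norm can be computed from forward-pass softmax probabilities alone, and how updates could then be down-weighted inversely to a cumulative gradient-norm estimate.

```python
# Illustrative sketch (NOT the paper's OTB algorithm): weight per-token
# updates inversely to a cumulative gradient-norm estimate, where the
# gradient norm is derived from forward-pass probabilities only.
import math

def logit_grad_norm_proxy(probs, chosen):
    """Exact L2 norm of d log p(chosen) / d logits for a softmax head.
    Since grad = onehot(chosen) - probs, we get
    ||grad||^2 = 1 - 2*p_chosen + sum_i p_i^2,
    which needs only the forward-pass probability vector."""
    p_c = probs[chosen]
    sq = sum(p * p for p in probs)
    return math.sqrt(max(1.0 - 2.0 * p_c + sq, 0.0))

def inverse_norm_weights(per_token_probs, chosen_ids, eps=1e-8):
    """Assign each token a weight inversely proportional to the
    cumulative squared gradient-norm proxy up to that token
    (an illustrative choice of 'inverse to cumulative gradient norm')."""
    cum = 0.0
    weights = []
    for probs, c in zip(per_token_probs, chosen_ids):
        cum += logit_grad_norm_proxy(probs, c) ** 2
        weights.append(1.0 / math.sqrt(cum + eps))
    return weights
```

Under this toy weighting, later tokens in a long rollout receive smaller update weights as the accumulated gradient-norm estimate grows, which is one plausible way to damp the variance explosion the abstract describes.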

Yingru Li, Jiawei Xu, Ziniu Li, Jiacai Liu, Wei Liu, Yuxuan Tong, Longtao Zheng, Zhenghai Xue, Yaxiang Zhang, Tianle Cai, Ge Zhang, Qian Liu, Baoxiang Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AIME 24 | Avg@32 Accuracy | 37.29 | 23 |
| Multi-Turn Tool-Integrated Reasoning (TIR) | AIME 25 | Peak Avg@32 Score | 28.13 | 6 |
| Multi-Turn Tool-Integrated Reasoning (TIR) | AIME 24 | Peak Avg@32 Score | 41.46 | 6 |
| Multi-Turn Tool-Integrated Reasoning (TIR) | AMC 23 | Peak Avg@32 Score | 79.45 | 6 |
| Multi-Turn Tool-Integrated Reasoning (TIR) | MATH500 | Peak Avg@32 Score | 84.69 | 6 |
| Single-Turn Mathematical Reasoning | AIME 25 | Peak Avg@32 Score | 30.31 | 5 |
| Single-Turn Mathematical Reasoning | AMC 23 | Peak Avg@32 Score | 85.08 | 5 |
| Single-Turn Mathematical Reasoning | MATH500 | Peak Avg@32 Score | 93.43 | 5 |
