
Shrinking the Variance: Shrinkage Baselines for Reinforcement Learning with Verifiable Rewards

About

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for post-training large reasoning models (LRMs) using policy-gradient methods such as GRPO. To stabilize training, these methods typically center trajectory rewards by subtracting the empirical mean reward for each prompt. Statistically, this centering acts as a control variate (baseline), reducing the variance of the policy-gradient estimator. In practice, the mean reward is estimated using per-prompt empirical averages computed from the generations for each prompt in a batch. Motivated by Stein's paradox, we propose shrinkage estimators that combine per-prompt and across-prompt means to improve per-prompt mean estimation accuracy, especially in the low-generation regime typical of RLVR. Theoretically, we construct a shrinkage-based baseline that provably yields lower-variance policy-gradient estimators across algorithms. Our baseline is a drop-in replacement for standard per-prompt mean baselines and requires no additional hyperparameters or computation. Empirically, shrinkage baselines consistently outperform empirical-mean baselines, producing lower-variance gradient updates and improved training stability.

Guanning Zeng, Zhaoyi Zhou, Daman Arora, Andrea Zanette • 2025
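
As a rough illustration of the idea in the abstract, the sketch below pulls each prompt's empirical mean reward toward the across-prompt grand mean and uses the shrunk value in place of the per-prompt mean when centering rewards. The function names, the positive-part James-Stein coefficient, and the pooled within-prompt variance estimate are illustrative assumptions for this sketch, not necessarily the exact construction used in the paper.

```python
import numpy as np

def shrinkage_baseline(rewards: np.ndarray) -> np.ndarray:
    """Per-prompt baselines shrunk toward the across-prompt grand mean.

    rewards: shape (num_prompts, num_generations), one verifiable reward per
    generation. Assumes num_prompts >= 4 and num_generations >= 2.
    Illustrative James-Stein-style construction (an assumption), not
    necessarily the estimator proposed in the paper.
    """
    num_prompts, num_gens = rewards.shape
    prompt_means = rewards.mean(axis=1)   # per-prompt empirical mean rewards
    grand_mean = prompt_means.mean()      # across-prompt (grand) mean

    # Sampling variance of each per-prompt mean, estimated from the pooled
    # within-prompt spread of rewards (assumed homoscedastic noise).
    sigma2 = rewards.var(axis=1, ddof=1).mean() / num_gens

    # Positive-part James-Stein shrinkage toward the grand mean.
    dispersion = np.sum((prompt_means - grand_mean) ** 2)
    shrink = max(0.0, 1.0 - (num_prompts - 3) * sigma2 / max(dispersion, 1e-12))

    return grand_mean + shrink * (prompt_means - grand_mean)


def centered_rewards(rewards: np.ndarray) -> np.ndarray:
    """GRPO-style reward centering with the shrinkage baseline swapped in
    for the usual per-prompt empirical mean."""
    return rewards - shrinkage_baseline(rewards)[:, None]


# Example: 8 prompts, 4 generations each, binary verifiable rewards.
rng = np.random.default_rng(0)
rewards = rng.binomial(1, 0.4, size=(8, 4)).astype(float)
advantages = centered_rewards(rewards)
```

With only a handful of generations per prompt, the per-prompt means are noisy; shrinking them toward the grand mean trades a small bias for a larger reduction in estimation variance, which is the Stein's-paradox intuition the abstract appeals to.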

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | GSM8K (test) | Accuracy: 58.93 | 797
Preference Modeling | Arena-Hard v2 | Win Rate: 7.4 | 9
Human Preference Evaluation | Arena Hard v0.1 | Win Rate: 56.7 | 3
Human Preference Evaluation | Arena Creative Writing | Win Rate: 23.4 | 3
