
Distribution-Aware Reward Estimation for Test-Time Reinforcement Learning

About

Test-time reinforcement learning (TTRL) enables large language models (LLMs) to self-improve on unlabeled inputs, but its effectiveness critically depends on how reward signals are estimated without ground-truth supervision. Most existing TTRL methods rely on majority voting (MV) over rollouts to produce deterministic rewards, implicitly assuming that the majority rollout provides a reliable learning signal. We show that this assumption is fragile: MV collapses the rollout distribution to a single outcome, discarding information about non-majority but correct action candidates, and yields systematically biased reward estimates. To address this, we propose Distribution-Aware Reward Estimation (DARE), which shifts reward estimation from a single majority outcome to the full empirical rollout distribution. DARE further augments this distribution-based reward with an exploration bonus and a distribution pruning mechanism, enabling exploration of non-majority rollouts and denoising the reward, which yields more informative and robust reward estimates. Extensive experiments on challenging reasoning benchmarks show that DARE improves optimization stability and final performance over recent baselines, achieving relative improvements of 25.3% on AIME 2024 and 5.3% on AMC.
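To make the contrast concrete, below is a minimal Python sketch of the two reward schemes the abstract describes: a majority-voting reward that collapses the rollouts to a single outcome, and a distribution-based reward that scores a candidate by its empirical frequency, with a pruning threshold and an exploration bonus. The function names, the frequency-based reward form, and the prune_threshold / explore_bonus parameters are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
from collections import Counter

def majority_vote_reward(rollout_answers, candidate):
    """Baseline MV reward: 1 if the candidate matches the majority answer.

    Collapses the rollout distribution to a single outcome, so a
    non-majority (but possibly correct) candidate gets zero signal.
    """
    majority, _ = Counter(rollout_answers).most_common(1)[0]
    return 1.0 if candidate == majority else 0.0

def distribution_aware_reward(rollout_answers, candidate,
                              prune_threshold=0.1, explore_bonus=0.05):
    """Illustrative distribution-based reward (hypothetical, not exact DARE).

    The reward is the candidate's empirical frequency over the rollouts,
    so consistent minority answers still receive signal. Answers below
    `prune_threshold` are pruned as noise; surviving non-majority answers
    get a small exploration bonus.
    """
    n = len(rollout_answers)
    freqs = {a: c / n for a, c in Counter(rollout_answers).items()}

    # Distribution pruning: drop low-frequency answers treated as reward
    # noise, then renormalize the remaining probability mass.
    kept = {a: f for a, f in freqs.items() if f >= prune_threshold}
    if candidate not in kept:
        return 0.0
    p = kept[candidate] / sum(kept.values())

    # Exploration bonus: upweight less-frequent answers that survived
    # pruning, encouraging exploration of non-majority rollouts.
    return p + explore_bonus * (1.0 - p)

# Six sampled answers to the same question; "17" is a consistent minority.
rollouts = ["42", "42", "42", "17", "17", "9"]
print(majority_vote_reward(rollouts, "17"))       # 0.0  -- discarded by MV
print(distribution_aware_reward(rollouts, "17"))  # ~0.37 -- minority keeps signal
```

Under MV the two minority rollouts of "17" are thrown away entirely; the distribution-based reward instead preserves a graded signal proportional to agreement, which is the property the abstract argues makes optimization more stable.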

Bodong Du, Xuanqi Huang, Xiaomeng Li • 2026

Related benchmarks

Task                     Dataset    Metric    Result  Rank
Mathematical Reasoning   AIME 2024  Accuracy  26.3    251
Mathematical Reasoning   AMC        Accuracy  55.7    151
General Reasoning        MMLU-Pro   Accuracy  48.8    48
Scientific Reasoning     GPQA       Mean@1    32.7    22
