
TTRL: Test-Time Reinforcement Learning

About

This paper investigates Reinforcement Learning (RL) on data without explicit labels for reasoning tasks in Large Language Models (LLMs). The core challenge is estimating rewards during inference without access to ground-truth labels. While this setting may seem intractable, we find that common practices in Test-Time Scaling (TTS), such as majority voting, yield surprisingly effective rewards for driving RL training. In this work, we introduce Test-Time Reinforcement Learning (TTRL), a novel method for training LLMs with RL on unlabeled data. TTRL enables self-evolution of LLMs by exploiting the priors in pre-trained models. Our experiments demonstrate that TTRL consistently improves performance across a variety of tasks and models. Notably, TTRL boosts the pass@1 performance of Qwen-2.5-Math-7B by approximately 211% on AIME 2024 using only unlabeled test data. Furthermore, although TTRL is supervised only by the maj@n metric, it consistently surpasses the maj@n upper bound of the initial model and approaches the performance of models trained directly on test data with ground-truth labels. These findings validate the general effectiveness of TTRL across various tasks and highlight its potential for broader tasks and domains. GitHub: https://github.com/PRIME-RL/TTRL
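The majority-voting reward described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the exact-match comparison are assumptions for illustration. The idea is simply that, among N sampled answers to one question, the most common answer acts as a pseudo-label, and each rollout is rewarded for agreeing with it:

```python
from collections import Counter

def majority_vote_rewards(answers):
    """Estimate per-rollout rewards without ground-truth labels.

    The majority answer among the sampled rollouts serves as a
    pseudo-label; a rollout gets reward 1 if its answer matches
    the majority, else 0. (Illustrative sketch of the TTRL idea.)
    """
    majority, _ = Counter(answers).most_common(1)[0]
    return [1 if a == majority else 0 for a in answers]

# Example: 8 sampled answers for one math question
sampled = ["42", "42", "17", "42", "42", "9", "42", "17"]
print(majority_vote_rewards(sampled))
# → [1, 1, 0, 1, 1, 0, 1, 0]  ("42" is the majority answer)
```

These binary rewards can then drive a standard RL update (e.g. a policy-gradient step), which is how training proceeds even though no ground-truth labels are available.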

Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xinwei Long, Ermo Hua, Biqing Qi, Youbang Sun, Zhiyuan Ma, Lifan Yuan, Ning Ding, Bowen Zhou • 2025

Related benchmarks

| Task                   | Dataset    | Metric     | Result | Rank |
|------------------------|------------|------------|--------|------|
| Mathematical Reasoning | AIME 2024  | Accuracy   | 24     | 251  |
| Mathematical Reasoning | MATH 500   | --         | --     | 155  |
| Mathematical Reasoning | AMC        | Accuracy   | 52.9   | 151  |
| Mathematical Reasoning | AMC        | Pass@1     | 68.55  | 112  |
| Mathematical Reasoning | AIME 2025  | Pass@1     | 27.4   | 96   |
| Mathematical Reasoning | AIME 2024  | Pass@1     | 47.13  | 86   |
| General Reasoning      | MMLU-Pro   | Accuracy   | 46.9   | 48   |
| Logic Reasoning        | ZebraLogic | Score      | 2.1    | 42   |
| Coding                 | HumanEval  | Mean Score | 0.506  | 28   |
| Knowledge Reasoning    | MMLU-Pro   | --         | --     | 27   |

Showing 10 of 29 rows.
