
Tool Verification for Test-Time Reinforcement Learning

About

Test-time reinforcement learning (TTRL) has emerged as a promising paradigm for self-evolving large reasoning models (LRMs), enabling online adaptation on unlabeled test inputs via self-induced rewards from majority voting. However, a spurious but high-frequency unverified consensus can become a biased, self-reinforcing reward signal, causing the model to collapse onto incorrect answers. We address this failure mode with T^3RL (Tool Verification for Test-Time Reinforcement Learning), which introduces test-time tool verification into reward estimation. Concretely, a verifier uses external tool evidence (e.g., from code execution) to upweight verified rollouts in verification-aware voting, producing more reliable pseudo-labels for training. Across math benchmarks of varying difficulty (MATH-500, AMC, and AIME 2024) and diverse backbone types, T^3RL significantly improves over TTRL, with larger gains on harder problems. More broadly, T^3RL can be viewed as verified online data synthesis, highlighting test-time tool verification as a key mechanism for stabilizing self-evolution.
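The following is a minimal sketch of the verification-aware voting idea described above, assuming a simple weighted-vote formulation: rollouts are candidate answers sampled from the policy, an external tool check (e.g., executing a candidate solution as code) marks some of them as verified, and verified rollouts receive a larger vote weight when selecting the pseudo-label. The names (Rollout, VERIFIED_WEIGHT, select_pseudo_label) and the specific weighting scheme are illustrative assumptions, not the paper's exact implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

# Assumed upweighting factors; the paper's actual weighting may differ.
VERIFIED_WEIGHT = 3.0
UNVERIFIED_WEIGHT = 1.0


@dataclass
class Rollout:
    answer: str        # final answer extracted from the rollout
    verified: bool     # True if an external tool (e.g., code execution) confirmed it


def select_pseudo_label(rollouts: list[Rollout]) -> str:
    """Pick a pseudo-label by weighted voting: verified rollouts count more,
    so a frequent-but-unverified consensus is less likely to win."""
    votes: dict[str, float] = defaultdict(float)
    for r in rollouts:
        votes[r.answer] += VERIFIED_WEIGHT if r.verified else UNVERIFIED_WEIGHT
    return max(votes, key=votes.get)


def pseudo_rewards(rollouts: list[Rollout]) -> list[float]:
    """Binary reward per rollout: 1 if it matches the voted pseudo-label."""
    label = select_pseudo_label(rollouts)
    return [1.0 if r.answer == label else 0.0 for r in rollouts]


if __name__ == "__main__":
    # Four rollouts agree on a wrong answer; two tool-verified rollouts carry the vote.
    sample = [Rollout("42", False)] * 4 + [Rollout("17", True)] * 2
    print(select_pseudo_label(sample))   # "17" (weight 6.0 beats 4.0)
    print(pseudo_rewards(sample))
```

Under plain majority voting the unverified consensus ("42") would win and be reinforced; upweighting tool-verified rollouts is what flips the pseudo-label toward the verified answer.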

Ruotong Liao, Nikolai Röhrich, Xiaohan Wang, Yuhui Zhang, Yasaman Samadzadeh, Volker Tresp, Serena Yeung-Levy • 2026

Related benchmarks

Task                         | Dataset   | Result (Pass@1) | Rank
Mathematical Problem Solving | AIME 2024 | 40              | 21
Mathematical Problem Solving | AMC       | 74.2            | 15
Mathematical Problem Solving | MATH 500  | 89.5            | 15
