
Rubrics as Rewards: Reinforcement Learning Beyond Verifiable Domains

About

Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective for complex reasoning tasks with clear correctness signals such as math and coding. However, extending it to real-world reasoning tasks is challenging, as evaluation depends on nuanced, multi-criteria judgments rather than binary correctness. Instance-specific rubrics have recently been used in evaluation benchmarks to capture such judgments, but their potential as reward signals for on-policy post-training remains underexplored. We introduce $\textbf{Rubrics as Rewards}$ (RaR), an on-policy reinforcement learning method that extends RLVR beyond verifiable domains by using rubric-based feedback. Across both medical and science domains, we evaluate multiple strategies for aggregating rubric feedback into rewards. The best RaR variant achieves relative improvements of up to $31\%$ on HealthBench and $7\%$ on GPQA-Diamond over popular LLM-as-judge baselines that rely on direct Likert-based rewards. These results demonstrate that RaR-trained policies adapt well to diverse evaluation formats, performing strongly on both rubric-based and multiple-choice tasks. Moreover, we find that using rubrics as structured reward signals yields better alignment for smaller judges and reduces performance variance across judge scales.
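The abstract describes aggregating per-criterion rubric feedback into a scalar reward. As a minimal illustrative sketch (not the paper's actual implementation — the criterion names, weights, and weighted-sum aggregation here are assumptions; the paper compares several aggregation strategies), a rubric reward can be computed as the weight-normalized sum of criteria a judge marks as satisfied:

```python
# Hypothetical rubric-based reward sketch. Each rubric item carries a weight;
# a judge returns a binary verdict per item; the reward is the fraction of
# total weight that the response satisfies.

def rubric_reward(criteria, verdicts):
    """criteria: list of (description, weight) pairs.
    verdicts: list of bools, one per criterion, from an LLM judge."""
    total = sum(weight for _, weight in criteria)
    if total == 0:
        return 0.0
    earned = sum(weight for (_, weight), ok in zip(criteria, verdicts) if ok)
    return earned / total

# Example with made-up medical-domain criteria:
criteria = [
    ("Mentions the correct diagnosis", 3.0),
    ("Recommends an appropriate follow-up test", 2.0),
    ("Avoids unsafe advice", 5.0),
]
print(rubric_reward(criteria, [True, False, True]))  # 0.8
```

Such a scalar can then serve as the reward signal in standard on-policy RL, in place of a binary verifier or a single Likert score from a judge.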

Anisha Gunjal, Anthony Wang, Elaine Lau, Vaskar Nath, Yunzhong He, Bing Liu, Sean Hendryx • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | IFEval Accuracy | 85.95 | 625 |
| Instruction Following | AlpacaEval 2.0 | Win Rate | 65.34 | 507 |
| General Knowledge | MMLU | MMLU General Knowledge Accuracy | 69.5 | 234 |
| Mathematical Problem Solving | MATH | Accuracy | 51.2 | 229 |
| Code | HumanEval | HumanEval Accuracy | 70.9 | 79 |
| Scientific Reasoning | GPQA Diamond | Score | 45.96 | 68 |
| Image Virtual Try-on | VITON-HD | LPIPS | 0.055 | 41 |
| Multi-turn Conversation | MT-Bench | Conversation Rating (1-10) | 8.4 | 41 |
| Instruction Following | FollowBench | -- | -- | 39 |
| Dialogue | MT-Bench | MT-Bench Score | 7.862 | 29 |
Showing 10 of 28 rows
