Rubrics to Tokens: Bridging Response-level Rubrics and Token-level Rewards in Instruction Following Tasks

About

Rubric-based Reinforcement Learning (RL) has emerged as a promising approach for aligning Large Language Models (LLMs) on complex, open-domain instruction-following tasks. However, existing methods rely predominantly on response-level rewards, which introduce severe reward sparsity and reward ambiguity. To address these issues, we propose Rubrics to Tokens (RTT), a novel rubric-based RL framework that bridges coarse response-level scores and fine-grained token-level credit assignment. RTT introduces a Token-Level Relevance Discriminator that predicts which tokens in a response are responsible for satisfying a specific constraint, and it optimizes the policy model via RTT-GRPO, which integrates response-level and token-level advantages within a unified framework. Furthermore, because token-level rubric-based RL moves from a one-dimensional, outcome-level reward to a three-dimensional reward space, we propose a novel group normalization method, Intra-sample Token Group Normalization, to accommodate this shift. Extensive experiments demonstrate that RTT consistently outperforms baselines in both instruction- and rubric-level accuracy across different models.
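To make the abstract's description concrete, here is a minimal Python sketch of the two advantage signals it names: a GRPO-style group normalization across sampled responses, and an intra-sample normalization over the tokens a relevance discriminator flags for a constraint. The paper's exact formulas are not given here, so every name below (the functions, the relevance mask, the mixing weight `beta`) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of combining response-level (GRPO-style) and
# token-level advantages in the spirit of RTT-GRPO. All names and the
# mixing scheme are assumptions for illustration only.
import numpy as np

def grpo_response_advantages(rewards):
    """Standard GRPO normalization over a group of responses sampled
    for the same prompt: (r - group mean) / group std."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def intra_sample_token_group_norm(token_rewards, relevance_mask):
    """Assumed form of 'Intra-sample Token Group Normalization':
    normalize token-level rewards within a single response, over the
    group of tokens the discriminator marked as relevant to a
    constraint, instead of normalizing across responses."""
    token_rewards = np.asarray(token_rewards, dtype=np.float64)
    mask = np.asarray(relevance_mask, dtype=bool)
    out = np.zeros_like(token_rewards)
    if mask.sum() > 1:  # need at least two tokens for a meaningful std
        group = token_rewards[mask]
        out[mask] = (group - group.mean()) / (group.std() + 1e-8)
    return out

# Toy usage: a group of 4 responses, plus token-level credit for the
# first response, whose tokens 2..4 were flagged as constraint-relevant.
resp_adv = grpo_response_advantages([1.0, 0.0, 0.5, 1.0])
tok_adv = intra_sample_token_group_norm(
    token_rewards=[0.0, 0.0, 0.8, 0.2, 0.6, 0.0],
    relevance_mask=[0, 0, 1, 1, 1, 0],
)
# One plausible unified per-token advantage: response-level advantage
# plus a weighted token-level term (beta is an assumed hyperparameter).
beta = 0.5
unified = resp_adv[0] + beta * tok_adv
print(unified)
```

The point of the sketch is only the shape of the computation: the response-level term gives every token the same coarse signal, while the token-level term redistributes credit among the tokens tied to a specific rubric constraint.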

Tianze Xu, Yanzhao Zheng, Pengrui Lu, Lyumanshan Ye, Yong Wu, Zhentao Zhang, Yuanqiang Yu, Chao Ma, Jihuai Zhu, Pengfei Liu, Baohua Dong, Hangcheng Zhu, Ruohui Huang, Gang Yu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Instruction Following | IFEval | -- | 625 |
| Science Reasoning | GPQA | Accuracy 52.01 | 243 |
| Multitask Language Understanding | MMLU-Pro | Accuracy 66.76 | 118 |
| Instruction Following | AdvancedIF | Accuracy 48.39 | 102 |
| Instruction Following | MulDimIF | Score 76.75 | 36 |
| Mathematical Reasoning | MATH500 | Accuracy (%) 91.8 | 29 |
| Instruction Following | IFBench | Prompt-level Accuracy 34.69 | 21 |
