Alternating Reinforcement Learning for Rubric-Based Reward Modeling in Non-Verifiable LLM Post-Training
About
Standard reward models typically predict scalar scores that fail to capture the multifaceted nature of response quality in non-verifiable domains such as creative writing or open-ended instruction following. To address this limitation, we propose Rubric-ARM, a framework that jointly optimizes a rubric generator and a judge with reinforcement learning from preference feedback. Unlike existing methods that rely on static rubrics or disjoint training pipelines, our approach treats rubric generation as a latent action learned to maximize judgment accuracy. We introduce an alternating optimization schedule to mitigate the non-stationarity of simultaneous updates, together with a theoretical analysis showing how this schedule reduces gradient variance during training. Extensive experiments show that Rubric-ARM achieves the best performance among compared baselines on multiple benchmarks and significantly improves downstream policy alignment in both offline and online reinforcement learning settings.
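The alternating schedule amounts to two phases repeated in rounds: update the rubric generator while the judge is frozen, then update the judge while the generator is frozen, using the same preference-accuracy reward in both phases. Below is a minimal Python sketch of that loop under stated assumptions; the names (`PreferencePair`, `generate_rubric`, `judge_prefers`, `rl_update`) and the binary reward are illustrative placeholders, not the paper's actual interfaces.

```python
# Minimal sketch of the alternating optimization schedule described above.
# The callables passed in are hypothetical stand-ins for the rubric generator,
# the rubric-conditioned judge, and a policy-gradient update on whichever
# model is currently being trained; they are not Rubric-ARM's real API.

import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # human-preferred response
    rejected: str  # dispreferred response


def train_alternating(
    data: List[PreferencePair],
    generate_rubric: Callable[[str], str],                 # prompt -> sampled rubric (latent action)
    judge_prefers: Callable[[str, str, str, str], bool],   # (prompt, rubric, a, b) -> "a preferred over b?"
    rl_update: Callable[[str, str, float], None],          # (model_name, sampled_text, reward) -> one RL step
    rounds: int = 10,
    steps_per_phase: int = 100,
) -> None:
    """Alternate between the two components instead of updating them jointly.

    Reward is 1.0 when the judge ranks the human-chosen response above the
    rejected one under the sampled rubric, else 0.0 (an assumed binary reward).
    """
    for _ in range(rounds):
        # Phase A: update only the rubric generator; the judge is frozen.
        for _ in range(steps_per_phase):
            pair = random.choice(data)
            rubric = generate_rubric(pair.prompt)
            reward = float(judge_prefers(pair.prompt, rubric, pair.chosen, pair.rejected))
            rl_update("generator", rubric, reward)

        # Phase B: update only the judge; the generator (and hence the rubric
        # distribution) is frozen, so the judge sees a stationary input.
        for _ in range(steps_per_phase):
            pair = random.choice(data)
            rubric = generate_rubric(pair.prompt)
            reward = float(judge_prefers(pair.prompt, rubric, pair.chosen, pair.rejected))
            rl_update("judge", rubric, reward)
```

Freezing one component per phase is what keeps the other component's learning target stationary, which is the mechanism the abstract credits for the reduced gradient variance.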
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reward Modeling | RewardBench Focus 2 | Accuracy | 90.3 | 82 |
| Reward Modeling | RewardBench Precise IF 2 | Accuracy | 46.2 | 70 |
| Reward Modeling | HelpSteer 3 | Accuracy | 71.1 | 39 |
| Reward Modeling | RM-Bench Chat Hard | Accuracy | 80.7 | 34 |
| Reward Modeling | PPE-IFEval | Accuracy | 0.72 | 18 |
| Reward Modeling | RM-Bench Chat | Accuracy | 69.2 | 18 |
| Reward Modeling | RewardBench Chat | Accuracy | 90.3 | 18 |
| Reward Modeling | InfoBench | Accuracy | 87.7 | 17 |
| Reward Modeling | FollowBench | Accuracy | 87.4 | 17 |
| Reward Modeling | IFBench | Accuracy | 67.1 | 17 |