OpenRubrics: Towards Scalable Synthetic Rubric Generation for Reward Modeling and LLM Alignment
About
Reward modeling lies at the core of reinforcement learning from human feedback (RLHF), yet most existing reward models rely on scalar or pairwise judgments that fail to capture the multifaceted nature of human preferences. Recent studies have explored rubrics-as-rewards (RaR), which use structured criteria to capture multiple dimensions of response quality. However, producing rubrics that are both reliable and scalable remains a key challenge. In this work, we introduce OpenRubrics, a diverse, large-scale collection of (prompt, rubric) pairs for training rubric-generation and rubric-based reward models. To elicit discriminative and comprehensive evaluation signals, we propose Contrastive Rubric Generation (CRG), which derives both hard rules (explicit constraints) and principles (implicit qualities) by contrasting preferred and rejected responses. We further remove noisy rubrics by filtering out those that break preference-label consistency. Across multiple reward-modeling benchmarks, our rubric-based reward model, Rubric-RM, surpasses strong size-matched baselines by 8.4%. These gains transfer to policy models on instruction-following and biomedical benchmarks.
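The pipeline is only described at a high level above, so the following is a minimal sketch of how Contrastive Rubric Generation and the preference-label consistency filter could be wired together. The prompt templates, function names, and the `llm(prompt: str) -> str` completion callable are illustrative assumptions, not OpenRubrics' released code.

```python
# Sketch of Contrastive Rubric Generation (CRG) plus consistency filtering.
# All prompts and the `llm` callable are hypothetical placeholders.

from typing import Callable, Optional, Tuple

CRG_PROMPT = """You are writing an evaluation rubric for the task below.
Compare the preferred and rejected responses, then list:
1. Hard rules: explicit constraints the preferred response satisfies
   and the rejected response violates.
2. Principles: implicit qualities that make the preferred response better.

Task: {prompt}
Preferred response: {chosen}
Rejected response: {rejected}

Rubric:"""

JUDGE_PROMPT = """Using the rubric, answer with "A" or "B" for the better response.

Rubric: {rubric}
Task: {prompt}
Response A: {a}
Response B: {b}

Better response:"""


def generate_rubric(llm: Callable[[str], str],
                    prompt: str, chosen: str, rejected: str) -> str:
    """Derive hard rules and principles by contrasting the response pair."""
    return llm(CRG_PROMPT.format(prompt=prompt, chosen=chosen, rejected=rejected))


def is_consistent(llm: Callable[[str], str], rubric: str,
                  prompt: str, chosen: str, rejected: str) -> bool:
    """Keep a rubric only if a rubric-guided judge recovers the human label."""
    verdict = llm(JUDGE_PROMPT.format(rubric=rubric, prompt=prompt,
                                      a=chosen, b=rejected))
    return verdict.strip().upper().startswith("A")  # "A" = human-preferred


def build_pair(llm: Callable[[str], str], prompt: str,
               chosen: str, rejected: str) -> Optional[Tuple[str, str]]:
    """Return a (prompt, rubric) training pair, or None if the rubric is noisy."""
    rubric = generate_rubric(llm, prompt, chosen, rejected)
    if is_consistent(llm, rubric, prompt, chosen, rejected):
        return (prompt, rubric)
    return None
```

The filtering step is the key quality gate in this reading: a rubric under which a judge cannot reproduce the original human preference is either uninformative or misleading, so dropping it keeps the (prompt, rubric) training set aligned with the preference labels.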
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reward Modeling | RewardBench 2 Focus | Accuracy | 86.5 | 82 |
| Reward Modeling | RewardBench 2 Precise IF | Accuracy | 40 | 70 |
| Reward Modeling | HelpSteer3 | Accuracy | 67.5 | 39 |
| Reward Modeling | RM-Bench Chat Hard | Accuracy | 75.4 | 34 |
| Reward Modeling | PPE-IFEval | Accuracy | 0.708 | 18 |
| Reward Modeling | RewardBench Chat | Accuracy | 89.9 | 18 |
| Reward Modeling | RM-Bench Chat | Accuracy | 67 | 18 |
| Reward Modeling | IFBench | Accuracy | 67.1 | 17 |
| Reward Modeling | InfoBench | Accuracy | 83.8 | 17 |
| Reward Modeling | FollowBench | Accuracy | 81.5 | 17 |