Reinforcing Chain-of-Thought Reasoning with Self-Evolving Rubrics
About
Although chain-of-thought (CoT) reasoning plays a crucial role in LLM reasoning, rewarding it directly is difficult: training a reward model demands heavy human labeling effort, and static reward models struggle with evolving CoT distributions and are prone to reward hacking. These challenges motivate an autonomous CoT rewarding approach that requires no human annotation and can evolve over time. Inspired by recent self-evolving training methods, we propose **RLCER** (**R**einforcement **L**earning with **C**oT Supervision via Self-**E**volving **R**ubrics), which enhances outcome-centric RLVR by rewarding CoTs with self-proposed, self-evolving rubrics. We show that such rubrics provide reliable CoT supervision signals even without outcome rewards, enabling RLCER to outperform outcome-centric RLVR. Moreover, when used as in-prompt hints, these self-proposed rubrics further improve inference-time performance.
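The core idea of combining an outcome reward with a rubric-based CoT reward can be sketched as follows. This is a minimal illustration, not RLCER's actual implementation: the rubric representation (keyword predicates here), the `combined_reward` function, and the blending weight `alpha` are all hypothetical stand-ins for the model-proposed, evolving rubrics described above.

```python
# Hedged sketch: rubric criteria are modeled as simple keyword predicates
# purely for demonstration; in RLCER the rubrics are proposed and evolved
# by the model itself.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rubric:
    description: str
    check: Callable[[str], bool]  # True if the CoT satisfies this criterion


def rubric_reward(cot: str, rubrics: List[Rubric]) -> float:
    """Fraction of rubric criteria the chain-of-thought satisfies."""
    if not rubrics:
        return 0.0
    return sum(r.check(cot) for r in rubrics) / len(rubrics)


def combined_reward(cot: str, outcome_correct: bool,
                    rubrics: List[Rubric], alpha: float = 0.5) -> float:
    """Blend a verifiable outcome reward with the rubric-based CoT reward.

    alpha is a hypothetical mixing weight, not a value from the paper.
    """
    return alpha * float(outcome_correct) + (1.0 - alpha) * rubric_reward(cot, rubrics)


rubrics = [
    Rubric("states the given quantities", lambda c: "given" in c.lower()),
    Rubric("verifies the final answer", lambda c: "check" in c.lower()),
]
cot = "Given x = 2, compute x^2 = 4. Check: 2 * 2 = 4."
print(combined_reward(cot, outcome_correct=True, rubrics=rubrics))  # 1.0
```

Because `rubric_reward` is defined even when no outcome label is available, the same scalar can supervise CoTs on their own, which mirrors the claim above that rubric signals remain useful without outcome rewards.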
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AIME 2024 | Accuracy | 37.5 | 251 |
| Mathematical Reasoning | AIME 2025 | Accuracy | 33.33 | 227 |
| Mathematical Reasoning | AMC 2023 | Accuracy | 86.41 | 65 |
| Scientific Reasoning | GPQA Diamond | Score | 48.77 | 28 |
| Scientific Reasoning | SuperGPQA Eng | Accuracy | 45 | 8 |
| Scientific Reasoning | SuperGPQA Sci | Accuracy | 50.25 | 8 |
| Medical Reasoning | SuperGPQA Med | Accuracy | 0.365 | 8 |