
Reinforcing Chain-of-Thought Reasoning with Self-Evolving Rubrics

About

Although chain-of-thought (CoT) reasoning plays a crucial role in LLMs, directly rewarding it is difficult: training a reward model demands heavy human labeling effort, and static reward models struggle with evolving CoT distributions and reward hacking. These challenges motivate us to seek an autonomous CoT rewarding approach that requires no human annotation and can evolve over time. Inspired by recent self-evolving training methods, we propose RLCER (Reinforcement Learning with CoT Supervision via Self-Evolving Rubrics), which enhances outcome-centric RLVR by rewarding CoTs with self-proposed and self-evolving rubrics. We show that these self-proposed and self-evolving rubrics provide reliable CoT supervision signals even without outcome rewards, enabling RLCER to outperform outcome-centric RLVR. Moreover, when used as in-prompt hints, the self-proposed rubrics further improve inference-time performance.
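The abstract describes blending a rubric-based CoT reward into the usual verifiable outcome reward. A minimal sketch of that idea is below; the function names, the keyword-matching "grader", and the mixing weight are all illustrative assumptions (the paper would use an LLM judge and its own weighting), not the authors' implementation.

```python
# Sketch of rubric-based CoT rewarding layered on outcome-centric RLVR.
# All names and the weighting scheme are assumptions for illustration.

def rubric_reward(cot: str, rubrics: list[str]) -> float:
    """Score a chain-of-thought by the fraction of rubric criteria it
    satisfies. Here a criterion 'passes' if its text appears in the CoT;
    in practice an LLM judge would grade each self-proposed rubric item."""
    if not rubrics:
        return 0.0
    passed = sum(1 for r in rubrics if r.lower() in cot.lower())
    return passed / len(rubrics)

def combined_reward(cot: str, outcome_correct: bool,
                    rubrics: list[str], alpha: float = 0.5) -> float:
    """Blend the verifiable outcome reward (RLVR-style) with the
    rubric-based CoT reward; alpha is an assumed mixing weight."""
    outcome = 1.0 if outcome_correct else 0.0
    return (1 - alpha) * outcome + alpha * rubric_reward(cot, rubrics)

rubrics = ["factor the quadratic", "check both roots"]
cot = "First we factor the quadratic, then check both roots against x > 0."
print(combined_reward(cot, outcome_correct=True, rubrics=rubrics))  # 1.0
```

Because `rubric_reward` is defined independently of the final answer, it can supervise the CoT even when no outcome reward is available (set `alpha = 1.0`), which is the regime in which the paper reports that rubrics alone remain a reliable signal.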

Leheng Sheng, Wenchang Ma, Ruixin Hong, Xiang Wang, An Zhang, Tat-Seng Chua • 2026

Related benchmarks

Task                    Dataset        Metric    Result  Rank
Mathematical Reasoning  AIME 2024      Accuracy  37.5    251
Mathematical Reasoning  AIME 2025      Accuracy  33.33   227
Mathematical Reasoning  AMC 2023       Accuracy  86.41   65
Scientific Reasoning    GPQA Diamond   Score     48.77   28
Scientific Reasoning    SuperGPQA Eng  Accuracy  45      8
Scientific Reasoning    SuperGPQA Sci  Accuracy  50.25   8
Medical Reasoning       SuperGPQA Med  Accuracy  36.5    8
