
Triviality Corrected Endogenous Reward

About

Reinforcement learning for open-ended text generation is constrained by the lack of verifiable rewards, forcing reliance on judge models that require either annotated data or powerful closed-source models. Inspired by recent work on unsupervised reinforcement learning for mathematical reasoning with confidence-based endogenous rewards, we investigate whether this principle can be adapted to open-ended writing tasks. We find that directly applying confidence rewards induces a Triviality Bias: the policy collapses toward high-probability outputs, sacrificing both diversity and substantive content. We propose TCER (Triviality Corrected Endogenous Reward), which counteracts this bias by rewarding the relative information gain of a specialist policy over a generalist reference policy, modulated by a probability-dependent correction mechanism. Across multiple writing benchmarks and model architectures, TCER achieves consistent improvements without external supervision. Furthermore, TCER transfers effectively to mathematical reasoning, validating the generality of the approach across different generation tasks.
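The abstract's exact reward formula is not reproduced here, but the idea it describes can be sketched as follows: score a sampled response by how much more likely the specialist policy finds each token than a generalist reference does, then down-weight tokens the policy is already near-certain about so that trivial, high-probability text earns little reward. The function name and the `1 - p` correction weight below are illustrative assumptions, not the paper's definition.

```python
import math

def tcer_reward(policy_logprobs, ref_logprobs):
    """Hypothetical TCER-style reward sketch (not the paper's exact formula).

    Rewards the relative information gain of a specialist policy over a
    generalist reference, with a probability-dependent correction that
    suppresses reward for near-certain ("trivial") tokens.
    """
    assert len(policy_logprobs) == len(ref_logprobs)
    total = 0.0
    for lp_pi, lp_ref in zip(policy_logprobs, ref_logprobs):
        gain = lp_pi - lp_ref            # per-token information gain over the reference
        weight = 1.0 - math.exp(lp_pi)   # illustrative correction: a token the policy
                                         # assigns probability ~1 contributes ~0 reward
        total += weight * gain
    return total / len(policy_logprobs)
```

Under this sketch, a confident-but-trivial continuation (policy probability near 1) scores close to zero regardless of its gain over the reference, which is the collapse mode a plain confidence reward would instead maximize.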

Xinda Wang, Zhengxu Hou, Yangshijie Zhang, Bingren Yan, Jialin Liu, Chenzhuo Zhao, Zhibo Yang, Bin-Bin Yang, Feng Xiao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy (ACC) | 62.1 | 203 |
| Mathematical Reasoning | Minerva | Pass@1 Accuracy | 44.5 | 90 |
| Mathematical Reasoning | Olympiad | Accuracy | 0.496 | 68 |
| Mathematical Reasoning | AIME 25 | Accuracy | 26.1 | 45 |
| Mathematical Reasoning | Math Reasoning Suite Average | Average Accuracy | 50.2 | 35 |
| Mathematical Reasoning | AIME 24 | Pass@1 Accuracy | 32.4 | 19 |
| Controllable writing | WritingBench (WB) | WB-A Score | 76.5 | 17 |
| Open-ended generation | HelloBench (HB) | HB-A Score | 82.2 | 17 |
| Writing | LongBench-Write (LB) | LB Score | 86.3 | 11 |
| Long-form text generation | LongBench | LB Score | 87.3 | 6 |
