
From Verifiable Dot to Reward Chain: Harnessing Verifiable Reference-based Rewards for Reinforcement Learning of Open-ended Generation

About

Reinforcement learning with verifiable rewards (RLVR) succeeds in reasoning tasks (e.g., math and code) by checking the final verifiable answer (i.e., a verifiable dot signal). However, extending this paradigm to open-ended generation is challenging because there is no unambiguous ground truth. Relying on single-dot supervision often leads to inefficiency and reward hacking. To address these issues, we propose reinforcement learning with verifiable reference-based rewards (RLVRR). Instead of checking the final answer, RLVRR extracts an ordered linguistic signal from high-quality references (i.e., a reward chain). Specifically, RLVRR decomposes rewards into two dimensions: content, which preserves deterministic core concepts (e.g., keywords), and style, which evaluates adherence to stylistic properties through LLM-based verification. In this way, RLVRR combines the exploratory strength of RL with the efficiency and reliability of supervised fine-tuning (SFT). Extensive experiments on more than 10 benchmarks with Qwen and Llama models confirm the advantages of our approach. RLVRR (1) substantially outperforms SFT trained with ten times more data and advanced reward models, (2) unifies the training of structured reasoning and open-ended generation, and (3) generalizes more effectively while preserving output diversity. These results establish RLVRR as a principled and efficient path toward verifiable reinforcement learning for general-purpose LLM alignment. We release our code and data at https://github.com/YJiangcm/RLVRR.
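The content/style decomposition described above can be sketched as a toy reward function. This is a minimal illustration, not the authors' implementation: `extract_keywords` is a crude stand-in for whatever the paper uses to identify deterministic core concepts, and `style_verifier` is a placeholder for an LLM-based judge returning a score in [0, 1].

```python
import re

def extract_keywords(reference: str) -> set[str]:
    """Toy content signal: treat longer words in the reference as the
    deterministic core concepts a good response should preserve.
    (Illustrative assumption, not the paper's extraction method.)"""
    return {w.lower() for w in re.findall(r"[A-Za-z]{6,}", reference)}

def content_reward(response: str, reference: str) -> float:
    """Fraction of reference keywords that appear in the response."""
    keys = extract_keywords(reference)
    if not keys:
        return 0.0
    resp_words = {w.lower() for w in re.findall(r"[A-Za-z]+", response)}
    return len(keys & resp_words) / len(keys)

def rlvrr_reward(response: str, reference: str,
                 style_verifier, alpha: float = 0.5) -> float:
    """Combine the content dimension with a style score.
    `style_verifier` stands in for an LLM judge returning [0, 1];
    `alpha` is a hypothetical mixing weight, not from the paper."""
    content = content_reward(response, reference)
    style = style_verifier(response)
    return alpha * content + (1 - alpha) * style
```

In an RL loop, this scalar would replace the single final-answer check of RLVR, letting partial credit flow from the reference rather than from one verifiable endpoint.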

Yuxin Jiang, Yufei Wang, Qiyuan Zhang, Xingshan Zeng, Liangyou Li, Jierun Chen, Chaofan Tao, Haoli Bai, Lifeng Shang• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | Accuracy (0-100) | 77.7 | 292 |
| Instruction Following | AlpacaEval 2.0 | LC Win Rate | 36.7 | 281 |
| General Knowledge | MMLU | MMLU General Knowledge Accuracy | 70.2 | 170 |
| Mathematical Problem Solving | MATH | Accuracy | 52.6 | 166 |
| Code | HumanEval | HumanEval Accuracy | 73 | 50 |
| Multi-turn Conversation | MT-Bench | Conversation Rating (1-10) | 8.7 | 41 |
| Instruction Following | FollowBench | -- | -- | 39 |
| Science Reasoning | ARC | Accuracy | 84.9 | 10 |
| Technical Problem-solving | Arena Hard | Win Rate | 52.3 | 10 |
