Generalist Reward Models: Found Inside Large Language Models
About
The alignment of Large Language Models (LLMs) is critically dependent on reward models trained on costly human preference data. While recent work explores bypassing this cost with AI feedback, these methods often lack a rigorous theoretical foundation. In this paper, we discover that a powerful generalist reward model is already latently present within any LLM trained via standard next-token prediction. We prove that this endogenous reward is not a heuristic but is theoretically equivalent to a reward function learned through offline inverse reinforcement learning. This connection allows us to directly elicit a high-quality reward signal from a base (pre-trained or supervised fine-tuned) model without any further training. Critically, we also prove that subsequent reinforcement learning using this endogenous reward yields a policy with a provably tighter error bound than the base model. To the best of our knowledge, this is the first theoretical proof of the effectiveness of reinforcement learning for LLMs. Our experiments validate this theory, demonstrating that our method not only outperforms existing LLM-as-a-judge approaches but can also surpass explicitly trained reward models. These findings suggest that the reward-modeling stage can be replaced by a principled method of eliciting the knowledge already captured during pre-training, heralding a more efficient, powerful, and scalable paradigm for LLM alignment as well as for multi-modal models.
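To make the core idea concrete, here is a minimal, hypothetical sketch of eliciting a reward-like signal from a next-token predictor without any extra training. A toy hand-coded bigram model stands in for a pre-trained LLM, and a response is scored by the model's own log-likelihood of generating it. The paper's actual construction is more refined (it is tied to offline inverse reinforcement learning), so this is an illustration of the general principle, not the method itself; all names and numbers below are invented.

```python
import math

# Toy next-token model: conditional probabilities P(next | prev) over a tiny
# vocabulary, standing in for a pre-trained LLM's softmax outputs.
# (Hypothetical values, for illustration only.)
BIGRAM = {
    ("the", "cat"): 0.6, ("the", "dog"): 0.3, ("the", "the"): 0.1,
    ("cat", "sat"): 0.7, ("cat", "cat"): 0.1, ("cat", "the"): 0.2,
    ("sat", "down"): 0.8, ("sat", "sat"): 0.1, ("sat", "the"): 0.1,
}

def endogenous_reward(prompt: str, response: str) -> float:
    """Score a response by the model's own log-likelihood of generating it.

    This is the simplest reward one can read off a next-token predictor:
    the sum of log P(token | context) over the response. No reward model
    is trained; the signal is elicited directly from the base model.
    """
    tokens = (prompt + " " + response).split()
    score = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        # Floor unseen transitions at a tiny probability.
        score += math.log(BIGRAM.get((prev, nxt), 1e-6))
    return score

fluent = endogenous_reward("the", "cat sat down")
garbled = endogenous_reward("the", "the the the")
print(fluent > garbled)  # True: the model's likelihood prefers the fluent reply
```

With a real LLM, the same pattern applies: run a forward pass over prompt plus candidate response, read off per-token log-probabilities, and aggregate them into a scalar score that can rank candidate responses.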
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy (ACC) | 60.9 | 203 |
| Mathematical Reasoning | Minerva | Pass@1 Accuracy | 42.6 | 90 |
| Mathematical Reasoning | Olympiad | Accuracy | 0.468 | 68 |
| Mathematical Reasoning | AMC'23 (test) | Accuracy | 30 | 60 |
| Mathematical Reasoning | AMC23 (test) | Pass@1 | 10 | 56 |
| Mathematical Reasoning | AIME 25 | Accuracy | 25.4 | 45 |
| Mathematical Reasoning | Math Reasoning Suite Average | Average Accuracy | 48.9 | 35 |
| Mathematical Reasoning | MATH 500 | Pass@4 | 60.6 | 20 |
| Mathematical Reasoning | Minerva Math | Accuracy @4 | 19.5 | 20 |
| Process-level Error Localization | PROCESSBENCH | GSM8K Accuracy | 35 | 20 |