Bradley-Terry Policy Optimization for Generative Preference Modeling
About
Reinforcement learning (RL) has recently proven effective at scaling chain-of-thought (CoT) reasoning in large language models on tasks with verifiable answers. However, extending RL-based reasoning training to more general non-verifiable tasks, where supervision is provided only through pairwise human preferences, remains challenging. Existing approaches typically apply RL objectives designed for verifiable rewards to preference-based settings in a heuristic manner. In this work, we show that introducing CoT reasoning into preference modeling fundamentally changes the structure of the Bradley-Terry (BT) likelihood, because the reasoning process must be treated as a latent variable. The result is a preference likelihood expressed as a ratio of expectations over stochastic generation trajectories, which cannot be optimized with Jensen-style bounds or standard RL objectives. To address this, we derive a consistent Monte Carlo estimator for the gradient of the resulting likelihood, leading to Bradley-Terry Policy Optimization (BTPO). Empirically, BTPO enables stable and effective training of generative preference models with CoT reasoning, consistently outperforming prior heuristic approaches across multiple benchmarks and model scales.
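The abstract's central object can be sketched numerically. The sketch below assumes the latent-CoT BT likelihood takes the ratio-of-expectations form P(y_w ≻ y_l) = E_z[exp s_w] / (E_z[exp s_w] + E_z[exp s_l]), where each trajectory z yields a scalar score; the scores here are simulated stand-ins, and the per-trajectory weights illustrate a generic self-normalized, REINFORCE-style plug-in estimator, not the paper's exact BTPO estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scalar scores s(x, y, z) produced at the end of K sampled CoT
# trajectories z for the "chosen" (y_w) and "rejected" (y_l) responses.
# In a real generative preference model these would come from the policy.
K = 1024
s_chosen = rng.normal(loc=1.0, scale=0.5, size=K)
s_rejected = rng.normal(loc=0.0, scale=0.5, size=K)


def bt_likelihood(sw, sl):
    """Monte Carlo estimate of the latent-CoT BT preference probability,
    assuming P(y_w > y_l) = E[exp s_w] / (E[exp s_w] + E[exp s_l]).
    Note this is a ratio of expectations, not an expectation of a ratio,
    which is why Jensen-style lower bounds do not apply directly."""
    ew, el = np.mean(np.exp(sw)), np.mean(np.exp(sl))
    return ew / (ew + el)


p = bt_likelihood(s_chosen, s_rejected)

# Self-normalized per-trajectory weights for a score-function gradient of
# log P: each sampled trajectory contributes weight_i * grad log pi(z_i).
ew, el = np.exp(s_chosen), np.exp(s_rejected)
a = ew / ew.sum() - ew / (ew.sum() + el.sum())  # chosen-side weights
b = -el / (ew.sum() + el.sum())                 # rejected-side weights
# The weights sum to zero, so a constant baseline leaves the estimator
# unchanged, a standard property of log-likelihood score-function gradients.
```

With the chosen response scoring higher on average, the estimated preference probability exceeds 0.5, and the gradient weights concentrate on high-scoring trajectories via the exp terms.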
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Preference Modeling | Helpfulness & Harmlessness | Accuracy | 72.2 | 20 |
| Preference Modeling | Instruction Following | Accuracy | 65.2 | 20 |
| Preference Modeling | Math Reasoning | Accuracy | 87.6 | 20 |
| Preference Classification | Helpfulness & Harmlessness (HH) (test) | Classification Accuracy | 70.4 | 4 |
| Preference Classification | Instruction Following (IF) (test) | Accuracy | 61.4 | 4 |
| Preference Classification | Math Reasoning (test) | Classification Accuracy | 85.4 | 4 |