
IPO: Your Language Model is Secretly a Preference Classifier

About

Reinforcement learning from human feedback (RLHF) has emerged as the primary method for aligning large language models (LLMs) with human preferences. While it enables LLMs to achieve human-level alignment, it often incurs significant computational and financial costs due to its reliance on training external reward models or human-labeled preferences. In this work, we propose Implicit Preference Optimization (IPO), an alternative approach that leverages generative LLMs as preference classifiers, thereby reducing the dependence on external human feedback or reward models to obtain preferences. We conduct a comprehensive evaluation on the preference classification ability of LLMs using RewardBench, assessing models across different sizes, architectures, and training levels to validate our hypothesis. Furthermore, we investigate the self-improvement capabilities of LLMs by generating multiple responses for a given instruction and employing the model itself as a preference classifier for Direct Preference Optimization (DPO)-based training. Our findings demonstrate that models trained through IPO achieve performance comparable to those utilizing state-of-the-art reward models for obtaining preferences.
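The pipeline the abstract describes can be sketched as: sample several responses per instruction, use the same model as a judge to pick the preferred response, and pair it with the losers as DPO training data. This is only an illustrative sketch, not the authors' implementation; the `generate` and `prefer` helpers and the toy scoring model are hypothetical stand-ins for real LLM calls.

```python
# Hypothetical sketch of the IPO idea: the same LLM both generates
# candidate responses and acts as the preference classifier, yielding
# (chosen, rejected) pairs for DPO-style training.

def generate(model, instruction, n=4):
    """Sample n candidate responses from the model (stubbed here)."""
    return [model.respond(f"{instruction} [sample {i}]") for i in range(n)]

def prefer(model, instruction, a, b):
    """Ask the model which of two responses it prefers.
    A real implementation would prompt the LLM as a judge and parse
    its verdict; this stub compares the model's scalar scores."""
    return a if model.score(a) >= model.score(b) else b

def build_dpo_pairs(model, instructions, n=4):
    """For each instruction, select the model-preferred response via a
    sequential knockout and pair it with every losing response."""
    pairs = []
    for inst in instructions:
        candidates = generate(model, inst, n)
        best = candidates[0]
        for cand in candidates[1:]:
            best = prefer(model, inst, best, cand)
        pairs.extend(
            {"prompt": inst, "chosen": best, "rejected": c}
            for c in candidates
            if c is not best
        )
    return pairs

class ToyModel:
    """Stand-in for an LLM: 'responds' by echoing and 'scores' by length."""
    def respond(self, prompt):
        return prompt

    def score(self, text):
        return len(text)

if __name__ == "__main__":
    pairs = build_dpo_pairs(ToyModel(), ["Explain DPO."], n=3)
    print(len(pairs))  # n - 1 rejected pairs per instruction
```

The resulting `pairs` list matches the (prompt, chosen, rejected) format that DPO trainers typically consume, so the self-labeled preferences can be fed directly into standard DPO training.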

Shivank Garg, Ayush Singh, Shweta Singh, Paras Chopra • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Understanding | MMLU | Accuracy | 37.6 | 825 |
| Reasoning | BBH | Accuracy | 34.6 | 672 |
| Instruction Following | IFEval | – | – | 625 |
| Question Answering | ARC Easy | Normalized Acc | 82.2 | 389 |
| Instruction Following | AlpacaEval | Win Rate | 78.2 | 227 |
| Reward Modeling | RewardBench | Accuracy | 78.02 | 166 |
| Bias Evaluation | BBQ | Accuracy | 89.1 | 113 |
| Out-of-Domain (OOD) Bias Evaluation | WinoBias | Accuracy | 0.501 | 14 |
| Structural Bias Evaluation | HANS | Accuracy | 97.7 | 14 |
| Stereotypical Bias Mitigation | UNQOVER | Accuracy | 99.6 | 14 |

Showing 10 of 17 rows

Other info

Code
