
Self-Play Preference Optimization for Language Model Alignment

About

Standard reinforcement learning from human feedback (RLHF) approaches that rely on parametric models such as the Bradley-Terry model fall short of capturing the intransitivity and irrationality in human preferences. Recent advancements suggest that directly working with preference probabilities can yield a more accurate reflection of human preferences, enabling more flexible and accurate language model alignment. In this paper, we propose a self-play-based method for language model alignment, which treats the problem as a constant-sum two-player game aimed at identifying the Nash equilibrium policy. Our approach, dubbed Self-Play Preference Optimization (SPPO), utilizes iterative policy updates to provably approximate the Nash equilibrium. Additionally, we propose a new SPPO objective that is both strongly motivated by theory and simple and effective in practice. In our experiments, using only 60k prompts (without responses) from the UltraFeedback dataset and without any prompt augmentation, by leveraging a pre-trained preference model, PairRM, with only 0.4B parameters, SPPO can obtain a model from fine-tuning Mistral-7B-Instruct-v0.2 that achieves the state-of-the-art length-controlled win rate of 28.53% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms (iterative) DPO and IPO on MT-Bench, Arena-Hard, and the Open LLM Leaderboard. Starting from a stronger base model, Llama-3-8B-Instruct, we are able to achieve a length-controlled win rate of 38.77%. Notably, the strong performance of SPPO is achieved without additional external supervision (e.g., responses, preferences, etc.) from GPT-4 or other stronger language models. Code is available at https://github.com/uclaml/SPPO.
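The self-play idea above can be illustrated on a toy problem. The sketch below is not the paper's training procedure (which fine-tunes a language model against a learned preference model); it is a minimal multiplicative-weights update of the kind SPPO's iterative policy updates approximate, on a discrete set of three candidate responses with a hand-made (illustrative) preference matrix P, where P[i][j] is the probability response i is preferred over response j. Each step plays the current policy against itself and reweights responses by their win rate against it.

```python
import math

# Illustrative preference matrix (not from the paper):
# P[i][j] = probability that response i is preferred over response j.
# Preferences here are transitive, so response 0 is the best response.
P = [
    [0.5, 0.7, 0.8],
    [0.3, 0.5, 0.6],
    [0.2, 0.4, 0.5],
]

def sppo_step(pi, eta=1.0):
    """One exponential-weight self-play update:
    pi'(y) ∝ pi(y) * exp(eta * P(y ≻ pi)),
    where P(y ≻ pi) is y's expected win rate against the current policy."""
    n = len(pi)
    # Expected win rate of each response against the policy itself (the "opponent").
    win = [sum(P[i][j] * pi[j] for j in range(n)) for i in range(n)]
    w = [pi[i] * math.exp(eta * win[i]) for i in range(n)]
    z = sum(w)
    return [wi / z for wi in w]

# Start from the uniform policy and iterate the self-play update.
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    pi = sppo_step(pi)
```

Because the preferences here are transitive, the iterates concentrate on the dominant response; with intransitive (cyclic) preferences, the same update instead moves toward a mixed Nash equilibrium, which is the setting that motivates working with preference probabilities rather than a Bradley-Terry reward.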

Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, Quanquan Gu• 2024

Related benchmarks

Task                        Dataset                 Metric           Result   Rank
Instruction Following       IFEval                  IFEval Accuracy  75.47    625
Instruction Following       AlpacaEval 2.0          Win Rate         48.5     507
Instruction Following       MT-Bench                MT-Bench Score   6.86     215
Knowledge                   MMLU                    Accuracy         75.37    136
Instruction Following       Arena Hard              Win Rate         43.89    103
LLM Alignment Evaluation    AlpacaEval 2.0 (test)   LC Win Rate      28.48    51
Human Preference Alignment  PKU-SafeRLHF            BLEU             0.309    31
Preference Alignment        HH-RLHF                 BLEU             0.231    31
Commonsense Reasoning       TruthfulQA              Accuracy         71.48    28
Commonsense Reasoning       ARC                     Accuracy         91.17    28

(Showing 10 of 15 rows.)
