
Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence

About

Direct Preference Optimization (DPO) has emerged as a prominent algorithm for the direct and robust alignment of Large Language Models (LLMs) with human preferences, offering a more straightforward alternative to the complex Reinforcement Learning from Human Feedback (RLHF). Despite its promising efficacy, DPO faces a notable drawback: "verbosity", a common over-optimization phenomenon also observed in RLHF. While previous studies mainly attributed verbosity to biased labels within the data, we propose that the issue also stems from an inherent algorithmic length reliance in DPO. Specifically, we suggest that the discrepancy in sequence-level Kullback-Leibler (KL) divergences between chosen and rejected sequences, used in DPO, results in overestimated or underestimated rewards due to varying token lengths. Empirically, we utilize datasets with different label lengths to demonstrate the presence of biased rewards. We then introduce an effective down-sampling approach, named SamPO, to eliminate potential length reliance. Our experimental evaluations, conducted across three LLMs of varying scales and a diverse array of conditional and open-ended benchmarks, highlight the efficacy of SamPO in mitigating verbosity, achieving improvements of 5% to 12% over DPO through debiased rewards. Our code is available at: https://github.com/LuJunru/SamPO/.

Junru Lu, Jiazheng Li, Siyu An, Meng Zhao, Yulan He, Di Yin, Xing Sun • 2024
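The abstract's core observation is that DPO's implicit reward compares chosen and rejected responses via sequence-level sums of per-token log-probability ratios, so unequal response lengths skew the comparison; SamPO instead down-samples to an equal number of token-level terms before comparing. The sketch below is a minimal PyTorch illustration of that idea, assuming the down-sampling draws, without replacement, as many per-token log-ratios from each response as the shorter one contains. The function names (`dpo_logratios`, `sampo_logratios`, `preference_logits`) and the exact sampling scheme are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch

def dpo_logratios(policy_logps, ref_logps):
    """Standard DPO: sum per-token log-ratios over the full sequence.

    policy_logps / ref_logps: 1-D tensors of per-token log-probabilities
    for a single response (padding already removed). Longer responses
    contribute more terms, which is the source of the length reliance.
    """
    return (policy_logps - ref_logps).sum()

def sampo_logratios(policy_logps, ref_logps, num_samples, generator=None):
    """SamPO-style variant (illustrative): randomly keep only
    `num_samples` per-token log-ratios before summing, so every
    response contributes the same number of KL terms regardless
    of its length.
    """
    per_token = policy_logps - ref_logps
    idx = torch.randperm(per_token.numel(), generator=generator)[:num_samples]
    return per_token[idx].sum()

def preference_logits(policy_chosen, ref_chosen,
                      policy_rejected, ref_rejected, beta=0.1):
    """Implicit reward margin fed into the preference loss."""
    # Down-sample both responses to the shorter of the two lengths,
    # removing the length mismatch between the two KL estimates.
    k = min(policy_chosen.numel(), policy_rejected.numel())
    chosen = sampo_logratios(policy_chosen, ref_chosen, k)
    rejected = sampo_logratios(policy_rejected, ref_rejected, k)
    return beta * (chosen - rejected)

# The preference loss itself is the same as in DPO:
#   loss = -torch.nn.functional.logsigmoid(preference_logits(...))
```

Under this reading, only the way the sequence-level log-ratio is accumulated changes; the sigmoid preference loss and the reference model are untouched.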

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score | 8.21 | 331 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 80.74 | 329 |
| Instruction Following | IFEval | -- | -- | 292 |
| Mathematical Reasoning | GSM8K | EM | 61.33 | 115 |
| LLM Alignment Evaluation | AlpacaEval 2.0 (test) | LC Win Rate | 27.45 | 51 |
| Language Understanding | MMLU | MMLU Score | 70.67 | 45 |
| Scientific Reasoning | ARC | Score | 86.32 | 29 |
| Instruction Following | AlpacaEval UltraFeedback 2 (test) | LC Win Rate | 52.17 | 12 |
| Instruction Following | AlpacaEval Helpsteer2 2 (test) | LC Win Rate | 26.95 | 12 |
| Truthfulness Evaluation | TruthfulQA | Normalized Accuracy | 58.44 | 10 |

Showing 10 of 11 rows.
