
Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment

About

We study the problem of aligning large language models (LLMs) with human preference data. Contrastive preference optimization has shown promising results in aligning LLMs with available preference data by optimizing the implicit reward associated with the policy. However, the contrastive objective focuses mainly on the relative values of implicit rewards associated with two responses while ignoring their actual values, resulting in suboptimal alignment with human preferences. To address this limitation, we propose calibrated direct preference optimization (Cal-DPO), a simple yet effective algorithm. We show that substantial improvement in alignment with the given preferences can be achieved simply by calibrating the implicit reward to ensure that the learned implicit rewards are comparable in scale to the ground-truth rewards. We demonstrate the theoretical advantages of Cal-DPO over existing approaches. The results of our experiments on a variety of standard benchmarks show that Cal-DPO remarkably improves off-the-shelf methods.
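To make the idea concrete, the sketch below pairs the standard DPO contrastive term with a squared-error calibration penalty that anchors the absolute scale of the implicit rewards, which is the gap the abstract describes. This is a minimal illustration under stated assumptions, not the paper's exact objective: the function name cal_dpo_loss, the ±0.5 reward targets, and the simple additive combination of the two terms are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def cal_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Illustrative Cal-DPO-style loss: DPO's contrastive term plus a
    calibration penalty on the implicit rewards themselves (sketch)."""
    # Implicit reward (as in DPO): r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)).
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)

    # Contrastive DPO term: depends only on the *difference* of implicit
    # rewards, so their absolute values are left unconstrained.
    dpo_term = -F.logsigmoid(chosen_reward - rejected_reward)

    # Calibration term (assumed form): a squared-error penalty pulling the
    # implicit rewards toward fixed targets, so their scale stays comparable
    # to ground-truth rewards instead of drifting freely.
    calibration_term = (chosen_reward - 0.5) ** 2 + (rejected_reward + 0.5) ** 2

    return (dpo_term + calibration_term).mean()

# Usage with per-sequence log-probabilities summed over response tokens:
loss = cal_dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.9]),
                    torch.tensor([-13.0]), torch.tensor([-14.8]))
```

The design choice the paper motivates is visible here: the contrastive term alone is invariant to shifting both implicit rewards by a constant, while the calibration term removes that degree of freedom by fixing their absolute scale.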

Teng Xiao, Yige Yuan, Huaisheng Zhu, Mingxiao Li, Vasant G Honavar • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
LLM Alignment Evaluation | AlpacaEval 2.0 (test) | LC Win Rate | 4.56 | 51
Dialogue Generation | Anthropic HH (test) | Average Preference Score | 69.07 | 16
Sentiment Control Language Generation | IMDB | Perplexity | 32.31 | 14
Summarization | Reddit TL;DR (test) | Preference vs SFT (%) | 75.61 | 8
Reasoning and Language Understanding | Open LLM Leaderboard (MMLU-PRO, IFEval, BBH, GPQA, MATH, GSM8K, ARC) v0.4.0 (test) | MMLU-PRO | 28.38 | 7

Other info

Code
