
Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive

About

Direct Preference Optimisation (DPO) is effective at significantly improving the performance of large language models (LLMs) on downstream tasks such as reasoning, summarisation, and alignment. Using pairs of preferred and dispreferred data, DPO models the relative probability of picking one response over another. In this work, we first show theoretically that the standard DPO loss can lead to a reduction of the model's likelihood of the preferred examples, as long as the relative probability between the preferred and dispreferred classes increases. We then show empirically that this phenomenon occurs when fine-tuning LLMs on common datasets, especially datasets in which the edit distance between pairs of completions is low. Using these insights, we design DPO-Positive (DPOP), a new loss function and training procedure which avoids this failure mode. Surprisingly, we find that DPOP outperforms DPO and other fine-tuning procedures across a wide variety of datasets and downstream tasks, including datasets with high edit distances between completions. Furthermore, we find that the DPOP-tuned model outperforms the DPO-tuned model (all else equal) on benchmarks independent of the fine-tuning data, such as MT-Bench. Finally, using DPOP, we create and open-source Smaug-34B and Smaug-72B, with the latter becoming the first open-source LLM to surpass an average accuracy of 80% on the HuggingFace Open LLM Leaderboard.
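For intuition: DPO maximises log σ(β(Δ_w − Δ_l)), where Δ_w and Δ_l are the policy-to-reference log-ratios of the preferred and dispreferred completions. Because only the difference of the two ratios is constrained, both can fall together, which is the failure mode the abstract describes; DPOP counteracts it with a penalty that fires whenever the policy's likelihood of the preferred completion drops below the reference's. The following is a minimal sketch, not the authors' released code: it assumes PyTorch-style summed per-completion log-probabilities as inputs, and the names `beta` and `lam`, their default values, and the exact placement of the penalty inside the sigmoid follow our reading of the paper and should be treated as illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO: reward the margin between the chosen and rejected
    # policy-to-reference log-ratios. Only the *difference* is constrained,
    # so both ratios can decrease together, lowering the likelihood of the
    # preferred completion.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

def dpop_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps, beta=0.1, lam=50.0):
    # DPOP: same margin term, plus a hinge penalty that is nonzero exactly
    # when the policy assigns the preferred completion lower log-likelihood
    # than the reference model does, i.e. max(0, log pi_ref - log pi_theta).
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    penalty = torch.clamp(ref_chosen_logps - policy_chosen_logps, min=0.0)
    return -F.logsigmoid(
        beta * (chosen_ratio - rejected_ratio - lam * penalty)
    ).mean()

# Hypothetical usage with a batch of two preference pairs (summed log-probs):
if __name__ == "__main__":
    pol_w = torch.tensor([-12.0, -8.5])  # policy log p(y_w | x)
    pol_l = torch.tensor([-14.0, -9.0])  # policy log p(y_l | x)
    ref_w = torch.tensor([-11.0, -9.0])  # reference log p(y_w | x)
    ref_l = torch.tensor([-13.0, -9.5])  # reference log p(y_l | x)
    print("DPO: ", dpo_loss(pol_w, pol_l, ref_w, ref_l).item())
    print("DPOP:", dpop_loss(pol_w, pol_l, ref_w, ref_l).item())
```

Note that with `lam=0` the two losses coincide, which makes the hinge term straightforward to ablate.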

Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, Colin White • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
LLM Alignment Evaluation | AlpacaEval 2.0 (test) | LC Win Rate | 4.53 | 51
Machine Translation | Flores-200 Romance group en->xx (test) | BLEU | 31.53 | 46
Machine Translation | Flores-200 Romance group xx->en (test) | BLEU | 35.78 | 46
Dialogue Generation | Anthropic HH (test) | Average Preference Score | 63.34 | 16
Sentiment Control Language Generation | IMDB | Perplexity | 35.58 | 14
Summarization | Reddit TL;DR (test) | Preference vs SFT (%) | 72.95 | 8
