KTO: Model Alignment as Prospect Theoretic Optimization
About
Kahneman & Tversky's $\textit{prospect theory}$ (1992) tells us that humans perceive random variables in a biased but well-defined manner; for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to their belonging to a family of loss functions that we call $\textit{human-aware losses}$ (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach KTO, and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B, despite only learning from a binary signal of whether an output is desirable. More broadly, our work suggests that there is no one HALO that is universally superior; the best loss depends on the inductive biases most appropriate for a given setting, an oft-overlooked consideration.
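The binary-signal objective described above can be sketched per example: the implied reward is the policy-to-reference log-ratio, and desirable and undesirable outputs are scored through a sigmoid value function around a reference point. The sketch below is a minimal, hedged illustration (not the authors' released implementation); the function name `kto_loss` and the treatment of the reference point `z0` as a precomputed scalar (in the paper it is a KL estimate over a microbatch) are assumptions for illustration.

```python
import math


def sigmoid(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))


def kto_loss(policy_logp: float, ref_logp: float, is_desirable: bool,
             z0: float, beta: float = 0.1,
             lam_d: float = 1.0, lam_u: float = 1.0) -> float:
    """Per-example KTO-style loss (illustrative sketch).

    policy_logp, ref_logp: log-probability of the output under the
        policy and the frozen reference model.
    z0: reference point (assumed precomputed here; the paper estimates
        it as a KL term over a microbatch).
    beta: risk-aversion / sharpness parameter.
    lam_d, lam_u: weights for desirable vs. undesirable examples,
        encoding loss aversion when they differ.
    """
    # Implied reward: how much more the policy likes this output
    # than the reference model does.
    r = policy_logp - ref_logp
    if is_desirable:
        # Value rises as the reward exceeds the reference point.
        value = lam_d * sigmoid(beta * (r - z0))
        return lam_d - value
    else:
        # Value rises as the reward falls below the reference point.
        value = lam_u * sigmoid(beta * (z0 - r))
        return lam_u - value
```

With `beta=1.0` and `z0=0.0`, a desirable output that the policy favors far more than the reference yields a loss near 0, while the same margin on an undesirable output yields a loss near `lam_u`, which is the asymmetry the Kahneman-Tversky value function is meant to capture.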
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score | 7 | 331 |
| Instruction Following | AlpacaEval 2.0 | LC Win Rate | 33.1 | 281 |
| Mathematical Reasoning | MathQA | Accuracy | 75.8 | 95 |
| LLM Alignment Evaluation | AlpacaEval 2 | LC Win Rate | 43.86 | 72 |
| LLM Alignment Evaluation | Arena Hard | Win Rate | 26.8 | 67 |
| Instruction Following and Helpfulness Evaluation | AlpacaEval 2.0 | Win Rate | 10 | 58 |
| AlpacaEval 2.0 | UltraFeedback | LC | 18.8 | 42 |
| MT-Bench | UltraFeedback | MT-Bench Score | 8 | 42 |
| Safety Alignment Evaluation | Llama-Guard | Harmfulness (%) | 83.42 | 36 |
| Language Model Alignment Evaluation | Arena Hard v0.1 | Win Rate (%) | 30.5 | 36 |