Sequence-level Large Language Model Training with Contrastive Preference Optimization
About
The next-token prediction loss is the dominant self-supervised training objective for large language models and has achieved promising results on a variety of downstream tasks. However, upon closer investigation, we find that this objective lacks an understanding of sequence-level signals, leading to a mismatch between the training and inference processes. To bridge this gap, we introduce a contrastive preference optimization (CPO) procedure that can inject sequence-level information into the language model at any training stage without expensive human-labeled data. Our experiments show that the proposed objective surpasses next-token prediction in terms of win rate on instruction-following and text-generation tasks.
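The abstract does not spell out the loss itself. As a rough illustration only, the sketch below shows a generic contrastive preference objective over sequence-level log-probabilities, in the style the abstract describes: the model is trained to rank a preferred sequence above a dispreferred one. The function names, the `beta` temperature, and the tensor shapes are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def sequence_logprob(logits: torch.Tensor, labels: torch.Tensor,
                     pad_id: int = -100) -> torch.Tensor:
    """Sum per-token log-probs into a sequence-level log-probability.

    logits: [batch, seq_len, vocab]; labels: [batch, seq_len],
    with `pad_id` marking positions to ignore.
    """
    logp = logits.log_softmax(dim=-1)
    mask = labels.ne(pad_id)
    token_logp = logp.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (token_logp * mask).sum(dim=-1)  # shape: [batch]


def contrastive_preference_loss(logp_chosen: torch.Tensor,
                                logp_rejected: torch.Tensor,
                                beta: float = 0.1) -> torch.Tensor:
    """Hypothetical sequence-level contrastive loss (DPO/CPO-style).

    Pushes the preferred sequence's log-probability above the
    dispreferred one's; `beta` scales the margin.
    """
    margin = beta * (logp_chosen - logp_rejected)
    return -F.logsigmoid(margin).mean()
```

In a training loop, `logp_chosen` and `logp_rejected` would come from two forward passes of the same policy model on the preferred and dispreferred completions; because the loss compares whole-sequence scores, it supplies exactly the sequence-level signal the abstract argues next-token prediction lacks.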
Zhili Feng, Dhananjay Ram, Cole Hawkins, Aditya Rawal, Jinman Zhao, Sheng Zha • 2025
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | -- | -- | 1036 |
| Instruction Following | IFEval | IFEval Accuracy | 84 | 625 |
| Mathematical Reasoning | GSM8K | Math Score | 79.5 | 197 |
| Graduate-level Question Answering | GPQA | Accuracy | 35 | 184 |
| Code Generation | HumanEval | Pass@1 | 63.5 | 171 |
| Multi-task Language Understanding | MMLU | MMLU Score | 72 | 112 |
| Multi-task Language Understanding | MMLU | Accuracy | 66 | 111 |
| Truthfulness | TruthfulQA | Truthfulness Accuracy | 53.5 | 86 |
| Question Answering | TruthfulQA | TruthfulQA Score | 60 | 61 |
| Large Language Model Evaluation | MMLU, GSM8K, GPQA, HUMANEVAL, TRUTHFULQA, IFEVAL | MMLU | 67.3 | 23 |