Sequence-level Large Language Model Training with Contrastive Preference Optimization

About

The next-token prediction loss is the dominant self-supervised training objective for large language models and has achieved promising results on a variety of downstream tasks. On closer investigation, however, we find that this objective lacks an understanding of sequence-level signals, leading to a mismatch between the training and inference processes. To bridge this gap, we introduce a contrastive preference optimization (CPO) procedure that can inject sequence-level information into the language model at any training stage without expensive human-labeled data. Our experiments show that the proposed objective surpasses next-token prediction in terms of win rate on instruction-following and text-generation tasks.
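The abstract gives no formula for the CPO objective. As a minimal sketch, assuming a reference-free, DPO-style pairwise loss over whole-sequence log-likelihoods (the function names, the `beta` margin scale, and the omission of padding masks are illustrative assumptions, not the authors' implementation), the sequence-level contrastive idea looks roughly like this in PyTorch:

```python
# Illustrative sketch only: a reference-free, DPO-style contrastive preference
# loss over whole-sequence log-likelihoods. Padding masks omitted for brevity.
import torch
import torch.nn.functional as F

def seq_logprob(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Sum of per-token log-probabilities for each sequence in the batch.

    logits:  (batch, seq_len, vocab_size) next-token logits
    targets: (batch, seq_len) ground-truth token ids
    """
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logp.sum(dim=-1)  # (batch,) sequence-level scores

def contrastive_preference_loss(chosen_logits, chosen_ids,
                                rejected_logits, rejected_ids,
                                beta: float = 0.1) -> torch.Tensor:
    """Push preferred sequences above dispreferred ones by a scaled margin."""
    margin = (seq_logprob(chosen_logits, chosen_ids)
              - seq_logprob(rejected_logits, rejected_ids))
    return -F.logsigmoid(beta * margin).mean()
```

Because this loss compares two complete sequences rather than scoring each next token in isolation, it carries the kind of sequence-level signal the abstract says next-token prediction lacks; the preference pairs could come from model samples ranked by a heuristic or reward model rather than human annotators, consistent with the paper's claim of avoiding expensive human-labeled data.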

Zhili Feng, Dhananjay Ram, Cole Hawkins, Aditya Rawal, Jinman Zhao, Sheng Zha • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Code Generation | HumanEval | - | - | 1036
Instruction Following | IFEval | IFEval Accuracy | 84 | 625
Mathematical Reasoning | GSM8K | Math Score | 79.5 | 197
Graduate-level Question Answering | GPQA | Accuracy | 35 | 184
Code Generation | HumanEval | Pass@1 | 63.5 | 171
Multi-task Language Understanding | MMLU | MMLU Score | 72 | 112
Multi-task Language Understanding | MMLU | Accuracy | 66 | 111
Truthfulness | TruthfulQA | Truthfulness Accuracy | 53.5 | 86
Question Answering | TruthfulQA | TruthfulQA Score | 60 | 61
Large Language Model Evaluation | MMLU, GSM8K, GPQA, HumanEval, TruthfulQA, IFEval | MMLU | 67.3 | 23

Showing 10 of 13 rows.
