Preference Curriculum: LLMs Should Always Be Pretrained on Their Preferred Data

About

Large language models (LLMs) generally use a fixed data distribution throughout pretraining. However, as the model's capability improves, its data preferences intuitively change, suggesting that different training stages call for different data. To achieve this, we propose the Perplexity Difference (PD) based Preference Curriculum learning (PDPC) framework, which continually perceives and serves the data preferred by LLMs in order to train and strengthen them. First, we introduce the PD metric to quantify the difference in how challenging a sample is for weak versus strong models. Samples with high PD are harder for weak models to learn and are better placed in the later stages of pretraining. Second, we propose a preference function to approximate and predict the LLM's data preference at any training step, so that the dataset can be arranged offline and training can proceed continuously without interruption. Experimental results on 1.3B and 3B models demonstrate that PDPC significantly surpasses baselines. Notably, the 3B model trained on 1T tokens achieves average accuracy gains of over 8.1% on MMLU and CMMLU.
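The abstract does not give the exact formula for the PD metric, but its core idea — contrasting a sample's perplexity under a weak and a strong model — can be illustrated with a minimal sketch. The relative normalization by the weak model's perplexity below is an assumption for illustration, not necessarily the paper's definition, and `perplexity_difference` is a hypothetical helper name:

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp of the mean negative log-likelihood over tokens.
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

def perplexity_difference(weak_logprobs, strong_logprobs):
    # PD contrasts how challenging a sample is for a weak vs. a strong model.
    # Normalizing by the weak model's perplexity (an assumption here) makes
    # the score comparable across samples of different difficulty.
    ppl_weak = perplexity(weak_logprobs)
    ppl_strong = perplexity(strong_logprobs)
    return (ppl_weak - ppl_strong) / ppl_weak

# Illustrative: a sample the weak model finds much harder than the strong
# model (per-token probability 0.25 vs. 0.5) gets a high PD score.
weak = [math.log(0.25)] * 4    # perplexity 4.0
strong = [math.log(0.5)] * 4   # perplexity 2.0
pd = perplexity_difference(weak, strong)  # (4 - 2) / 4 = 0.5
```

Under this sketch, a curriculum would sort samples by PD in ascending order, so that high-PD samples — those the weak model struggles with most relative to the strong model — appear in the later stages of pretraining.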

Xuemiao Zhang, Liangyu Xu, Feiyu Duan, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, Xunliang Cai • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 49.9 | 1891 |
| Question Answering | ARC Challenge | Accuracy | 26.6 | 906 |
| Multi-task Language Understanding | MMLU | Accuracy | 35.8 | 876 |
| Commonsense Reasoning | PIQA | Accuracy | 76.3 | 751 |
| Reasoning | BBH | Accuracy | 25.7 | 672 |
| Question Answering | ARC-E | Accuracy | 69.7 | 416 |
| Question Answering | ARC Easy | Normalized Acc | 57.3 | 389 |
| Physical Interaction Question Answering | PIQA | Accuracy | 68 | 333 |
| Question Answering | SciQ | Accuracy | 87.9 | 283 |
| Question Answering | ARC-C | Accuracy | 35.8 | 192 |

(Showing 10 of 16 rows.)
