
FRAME: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy

About

Large language models (LLMs) have significantly advanced human language understanding and generation, and the quality and organization of pretraining data are crucial to their performance. Multi-stage pretraining is a promising approach, but existing methods often lack quantitative criteria for data partitioning and instead rely on intuitive heuristics. In this paper, we propose the novel Four-quadRAnt Multi-stage prEtraining strategy (FRAME), guided by the principle of organizing pretraining into four stages so that the loss drops significantly four times. This principle is grounded in two key findings: training on high-perplexity (PPL) data followed by low-PPL data, and training on low-PPL-difference (PD) data followed by high-PD data, each cause the loss to drop significantly twice and improve performance. By partitioning data into four quadrants and strategically ordering them, FRAME achieves a remarkable 16.8% average improvement over random data organization on MMLU and CMMLU for a 3B model, effectively boosting LLM performance.
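The abstract does not include an implementation, but the quadrant construction it describes can be sketched. In the illustrative Python sketch below, the function name `partition_four_quadrants`, the use of median thresholds, and the particular `STAGE_ORDER` are all assumptions rather than the paper's published method: documents are split by whether their PPL and PD scores fall above or below the median of each score, and the four resulting quadrants are then visited in a fixed order.

```python
# Minimal sketch of a FRAME-style four-quadrant partition (assumptions noted).
from statistics import median

def partition_four_quadrants(docs, ppl_scores, pd_scores):
    """Split documents into four quadrants by PPL and PD relative to medians.

    Median thresholds are an assumption; the paper may use other cutoffs.
    """
    ppl_mid = median(ppl_scores)
    pd_mid = median(pd_scores)
    quadrants = {
        ("high_ppl", "low_pd"): [],
        ("high_ppl", "high_pd"): [],
        ("low_ppl", "low_pd"): [],
        ("low_ppl", "high_pd"): [],
    }
    for doc, ppl, pd in zip(docs, ppl_scores, pd_scores):
        key = (
            "high_ppl" if ppl >= ppl_mid else "low_ppl",
            "high_pd" if pd >= pd_mid else "low_pd",
        )
        quadrants[key].append(doc)
    return quadrants

# One plausible stage order: the PPL finding (high before low) applied across
# halves, and the PD finding (low before high) applied within each half.
# This exact sequence is an assumption; the paper's schedule may differ.
STAGE_ORDER = [
    ("high_ppl", "low_pd"),
    ("high_ppl", "high_pd"),
    ("low_ppl", "low_pd"),
    ("low_ppl", "high_pd"),
]

if __name__ == "__main__":
    docs = ["doc_a", "doc_b", "doc_c", "doc_d"]
    ppl = [12.0, 3.5, 9.8, 2.1]   # hypothetical perplexity scores
    pd = [0.4, 1.9, 2.3, 0.2]     # hypothetical PPL-difference scores
    quads = partition_four_quadrants(docs, ppl, pd)
    for stage, key in enumerate(STAGE_ORDER, start=1):
        print(f"Stage {stage}: {key} -> {quads[key]}")
```

Under this sketch, each stage introduces a data distribution shift, which is the mechanism the abstract credits for the four significant loss drops.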

Xuemiao Zhang, Feiyu Duan, Liangyu Xu, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, Xunliang Cai • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 50.2 | 1460 |
| Multi-task Language Understanding | MMLU | Accuracy | 43.0 | 842 |
| Commonsense Reasoning | PIQA | Accuracy | 76.9 | 647 |
| Reasoning | BBH | Accuracy | 27.9 | 507 |
| Question Answering | ARC-E | Accuracy | 71.0 | 242 |
| Question Answering | SciQ | Accuracy | 90.5 | 226 |
| Reasoning | ARC Easy | Accuracy | 62.9 | 183 |
| Question Answering | ARC-C | Accuracy | 36.5 | 166 |
| Reasoning | ARC Challenge | Accuracy | 26.5 | 45 |
| Multi-task Language Understanding | CMMLU | Accuracy | 45.7 | 22 |

(One additional benchmark row is not shown.)
