
daVinci-LLM: Towards the Science of Pretraining

About

The foundational pretraining phase determines a model's capability ceiling, since post-training struggles to overcome the foundations laid during pretraining, yet it remains critically under-explored. This stems from a structural paradox: organizations with computational resources operate under commercial pressures that inhibit transparent disclosure, while academic institutions possess research freedom but lack pretraining-scale compute. daVinci-LLM occupies this unexplored intersection, combining industrial-scale resources with full research freedom to advance the science of pretraining. We adopt a fully-open paradigm that treats openness as scientific methodology, releasing complete data processing pipelines, full training processes, and systematic exploration results. Recognizing that the field lacks a systematic methodology for data processing, we employ the Data Darwinism framework, a principled L0-L9 taxonomy spanning filtering to synthesis. We train a 3B-parameter model from random initialization across 8T tokens using a two-stage adaptive curriculum that progressively shifts from foundational capabilities to reasoning-intensive enhancement. Through 200+ controlled ablations, we establish that processing depth systematically enhances capabilities, making it a critical dimension alongside volume scaling; that different domains exhibit distinct saturation dynamics, necessitating adaptive strategies ranging from proportion adjustments to format shifts; that compositional balance enables targeted intensification while preventing performance collapse; and that evaluation protocol choices shape our understanding of pretraining progress. By releasing the complete exploration process, we enable the community to build upon our findings and systematic methodologies to form accumulative scientific knowledge in pretraining.
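The two-stage adaptive curriculum described above can be sketched as a data-mixture scheduler. Everything here is an illustrative assumption, not the released daVinci-LLM configuration: the domain names, stage weights, and the 75% stage boundary are hypothetical placeholders for the real recipe.

```python
# Hypothetical sketch of a two-stage adaptive curriculum: stage 1
# emphasizes foundational data, stage 2 shifts sampling mass toward
# reasoning-intensive domains. All weights are illustrative.

def curriculum_weights(tokens_seen: float, total_tokens: float = 8e12,
                       stage_boundary: float = 0.75) -> dict:
    """Return per-domain sampling weights at the current training point."""
    stage1 = {"web": 0.60, "code": 0.15, "math": 0.10, "books": 0.15}
    stage2 = {"web": 0.35, "code": 0.25, "math": 0.25, "books": 0.15}
    progress = min(tokens_seen / total_tokens, 1.0)
    if progress < stage_boundary:
        return stage1
    # Interpolate across the transition to avoid an abrupt
    # distribution shift at the stage boundary.
    t = (progress - stage_boundary) / (1.0 - stage_boundary)
    return {k: (1 - t) * stage1[k] + t * stage2[k] for k in stage1}
```

A sampler would re-query these weights periodically; the smooth interpolation reflects the paper's "progressive shift" framing rather than a hard switch, though the actual transition schedule is not specified here.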

Yiwei Qin, Yixiu Liu, Tiantian Mi, Muhang Xie, Zhen Huang, Weiye Si, Pengrui Lu, Siyuan Feng, Xia Wu, Liming Liu, Ye Luo, Jinlong Hou, Qipeng Guo, Yu Qiao, Pengfei Liu • 2026

Related benchmarks

Task                                      | Dataset        | Metric   | Result | Rank
------------------------------------------|----------------|----------|--------|-----
Commonsense Reasoning                     | WinoGrande     | -        | -      | 1085
Physical Commonsense Reasoning            | PIQA           | Accuracy | 77.26  | 572
Commonsense Reasoning                     | HellaSwag      | Accuracy | 71.17  | 350
Mathematical Reasoning                    | GSM-PLUS       | Accuracy | 50.38  | 66
Code Generation                           | EvalPlus       | Pass@1   | 57.32  | 61
STEM Knowledge                            | MMLU STEM      | Accuracy | 53.41  | 26
Human-level Standardized Exam Evaluation  | AGIEval        | Score    | 26.77  | 14
Science Knowledge                         | MMLU Pro STEM  | Score    | 46.7   | 8
Graduate-level Science QA                 | GPQA Main      | Score    | 32.37  | 8
Science Question Answering                | SuperGPQA      | Score    | 19.56  | 8

(10 of 11 rows shown)

Other info

GitHub
