
YuLan-Mini: An Open Data-efficient Language Model

About

Effective pre-training of large language models (LLMs) has been challenging due to the immense resource demands and the complexity of the technical processes involved. This paper presents a detailed technical report on YuLan-Mini, a highly capable base model with 2.42B parameters that achieves top-tier performance among models of similar parameter scale. Our pre-training approach focuses on enhancing training efficacy through three key technical contributions: an elaborate data pipeline that combines data cleaning with data scheduling strategies, a robust optimization method that mitigates training instability, and an effective annealing approach that incorporates targeted data selection and long-context training. Remarkably, YuLan-Mini, trained on 1.08T tokens, achieves performance comparable to industry-leading models that require significantly more data. To facilitate reproduction, we release the full details of the data composition for each training phase. Project details can be accessed at the following link: https://github.com/RUC-GSAI/YuLan-Mini.
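As a quick way to try the released base model, here is a minimal loading-and-generation sketch using Hugging Face transformers. The model id yulan-team/YuLan-Mini is an assumption (check the GitHub repository for the official checkpoint name), and the generation settings are illustrative, not the authors' recommended configuration.

```python
# Minimal sketch: load a YuLan-Mini checkpoint and sample a completion.
# The model id below is an assumption; see the project repository for
# the official checkpoint name. Depending on the checkpoint,
# trust_remote_code=True may also be required.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yulan-team/YuLan-Mini"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```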

Yiwen Hu, Huatong Song, Jia Deng, Jiapeng Wang, Jie Chen, Kun Zhou, Yutao Zhu, Jinhao Jiang, Zican Dong, Wayne Xin Zhao, Ji-Rong Wen · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | WinoGrande | - | - | 1085 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 76.22 | 572 |
| Commonsense Reasoning | HellaSwag | Accuracy | 68.56 | 350 |
| Mathematical Reasoning | GSM-PLUS | Accuracy | 43.71 | 66 |
| Code Generation | EvalPlus | Pass@1 | 62.25 | 61 |
| STEM Knowledge | MMLU STEM | Accuracy | 44.12 | 26 |
| Human-level Standardized Exam Evaluation | AGIEval | Score | 28.22 | 14 |
| Question Answering | OpenBookQA | Score | 43 | 8 |
| Science Question Answering | SuperGPQA | Score | 15.53 | 8 |
| Graduate-level Science QA | GPQA Main | Score | 29.91 | 8 |
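Scores such as the HellaSwag, PIQA, and WinoGrande accuracies above can be approximated with EleutherAI's lm-evaluation-harness. The sketch below is illustrative only: the model id is an assumption, and exact numbers depend on harness version, few-shot settings, and prompts, which may not match the paper's evaluation protocol.

```python
# Minimal sketch: score a few benchmarks with lm-evaluation-harness
# (pip install lm-eval). Results may differ from the table because
# the paper's exact evaluation settings are not reproduced here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=yulan-team/YuLan-Mini",  # assumed model id
    tasks=["hellaswag", "piqa", "winogrande"],
)
for task, metrics in results["results"].items():
    print(task, metrics)
```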
