
Instruction Pre-Training: Language Models are Supervised Multitask Learners

About

Unsupervised multitask pre-training has been the critical method behind the recent success of language models (LMs). However, supervised multitask learning still holds significant promise, as scaling it in the post-training stage trends towards better generalization. In this paper, we explore supervised multitask pre-training by proposing Instruction Pre-Training, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train LMs. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of Instruction Pre-Training. In pre-training from scratch, Instruction Pre-Training not only consistently enhances pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, Instruction Pre-Training enables Llama3-8B to be comparable to or even outperform Llama3-70B. Our model, code, and data are available at https://github.com/microsoft/LMOps.
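At its core, the framework turns each raw-corpus document into an instruction-augmented training example by appending synthesized instruction-response pairs. The Python sketch below illustrates one plausible shape of that data-construction step under stated assumptions: `synthesize_instruction_pairs` is a hypothetical placeholder for the instruction synthesizer described above (in practice an open-source LM that generates pairs grounded in the raw text), and the formatting template is an assumption, not the authors' exact format.

```python
# Minimal sketch of instruction-augmented pre-training data construction.
# `synthesize_instruction_pairs` is a hypothetical stand-in for the paper's
# instruction synthesizer; here it returns canned pairs purely for illustration.

from typing import List, Tuple


def synthesize_instruction_pairs(raw_text: str) -> List[Tuple[str, str]]:
    """Placeholder: a real synthesizer would prompt an open-source LM to
    generate instruction-response pairs grounded in `raw_text`."""
    return [
        ("Summarize the passage in one sentence.",
         "The passage describes instruction-augmented pre-training data."),
        ("What task category does this text best illustrate?",
         "Reading comprehension."),
    ]


def build_pretraining_example(raw_text: str) -> str:
    """Append synthesized instruction-response pairs to the raw text,
    yielding a single training document (one assumed formatting choice)."""
    pairs = synthesize_instruction_pairs(raw_text)
    qa_block = "\n\n".join(
        f"Instruction: {inst}\nResponse: {resp}" for inst, resp in pairs
    )
    return f"{raw_text}\n\n{qa_block}"


if __name__ == "__main__":
    corpus = ["Instruction pre-training augments raw corpora with tasks."]
    augmented = [build_pretraining_example(doc) for doc in corpus]
    print(augmented[0])
```

In the paper's setting this step is applied at corpus scale (the 200M pairs across 40+ task categories mentioned above), and the augmented documents are then used for ordinary LM pre-training.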

Daixuan Cheng, Yuxian Gu, Shaohan Huang, Junyu Bi, Minlie Huang, Furu Wei • 2024

Related benchmarks

Task                          Dataset           Result            Rank
Instruction Following         AlpacaEval        -                 125
Knowledge-focused evaluation  MixEval Hard      Accuracy 16.7     8
Knowledge-focused evaluation  MixEval Standard  Accuracy 19.8     8
Open-ended evaluation         MT-Bench 101      Likert Score 2.4  8
