
RegMix: Data Mixture as Regression for Language Model Pre-training

About

The data mixture for large language model pre-training significantly impacts performance, yet how to determine an effective mixture remains unclear. We propose RegMix to automatically identify a high-performing data mixture by formulating it as a regression task. RegMix trains many small models on diverse data mixtures, uses regression to predict the performance of unseen mixtures, and applies the best predicted mixture to train a large-scale model with orders of magnitude more compute. To empirically validate RegMix, we train 512 models with 1M parameters for 1B tokens to fit the regression model and predict the best data mixture. Using this mixture, we train a 1B parameter model for 25B tokens (i.e., 1000× larger and 25× longer), which we find performs best among 64 candidate 1B parameter models trained on other mixtures. Furthermore, RegMix consistently outperforms human selection in experiments involving models up to 7B parameters trained on 100B tokens, while matching or exceeding DoReMi using just 10% of the computational resources. Our experiments also show that (1) data mixtures significantly impact performance; (2) web corpora, rather than data perceived as high-quality like Wikipedia, have the strongest positive correlation with downstream performance; (3) domains interact in complex ways that often contradict common sense, so automatic approaches like RegMix are needed; (4) data mixture effects transcend scaling laws. Our code is available at https://github.com/sail-sg/regmix.
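The pipeline the abstract describes can be sketched end to end: sample diverse mixtures from the simplex, measure each with a cheap proxy run, fit a regression from mixture weights to performance, then pick the best-predicted mixture from a large candidate pool. The sketch below is a minimal, self-contained illustration with a synthetic stand-in for the proxy-model losses (the domain names, the `proxy_loss` function, and its difficulty weights are all hypothetical; the paper fits linear and LightGBM regressors, whereas this uses ordinary least squares via NumPy).

```python
import numpy as np

rng = np.random.default_rng(0)
domains = ["web", "wiki", "code", "books"]  # hypothetical domain set
n_proxy, n_candidates = 512, 10_000

# Step 1: sample diverse training mixtures (points on the simplex).
train_mix = rng.dirichlet(np.ones(len(domains)), size=n_proxy)

# Stand-in for proxy-model validation loss; in RegMix this value comes
# from actually training a tiny (~1M-parameter) model on each mixture.
def proxy_loss(mix):
    difficulty = np.array([1.0, 1.6, 1.3, 1.4])  # hypothetical
    return mix @ difficulty + 0.01 * rng.standard_normal(len(mix))

y = proxy_loss(train_mix)

# Step 2: fit a regression model mapping mixture weights -> loss
# (ordinary least squares here, with an intercept column).
X = np.column_stack([train_mix, np.ones(n_proxy)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 3: predict loss for many unseen candidate mixtures and keep the
# best one; the large-scale model is then trained on this mixture.
cand = rng.dirichlet(np.ones(len(domains)), size=n_candidates)
pred = np.column_stack([cand, np.ones(n_candidates)]) @ coef
best = cand[pred.argmin()]
print(dict(zip(domains, best.round(3))))
```

Because the synthetic difficulty weights make "web" the cheapest domain, the selected mixture concentrates its weight there, mirroring the paper's finding that regression over many cheap proxy runs can rank mixtures without ever training a large model on each one.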

Qian Liu, Xiaosen Zheng, Niklas Muennighoff, Guangtao Zeng, Longxu Dou, Tianyu Pang, Jing Jiang, Min Lin• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | WinoGrande | – | – | 1085 |
| Commonsense Reasoning | HellaSwag | HellaSwag Accuracy | 65.15 | 350 |
| Mathematical Reasoning | MathQA | – | – | 305 |
| Math Reasoning | GSM8K | Accuracy | 55.9 | 187 |
| Question Answering | ARC Challenge | Accuracy (ARC) | 37.17 | 142 |
| Math Reasoning | MATH | Accuracy | 20.35 | 121 |
| Question Answering | OpenBookQA | Accuracy | 36.6 | 119 |
| General Language Understanding and Reasoning | General Benchmarks (MMLU, HellaSwag, OBQA, WinoGrande, ARC-C, PiQA, SciQ, LogiQA) | MMLU Accuracy | 35.68 | 70 |
| Multi-task Language Understanding | MMLU | MMLU Accuracy | 35.03 | 59 |
| Mathematical Reasoning | Math Benchmarks (GSM8K, Minerva, MATH, MathQA) | GSM8K Score | 55.9 | 53 |

Showing 10 of 17 rows.
