
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

About

The mixture proportions of pretraining data domains (e.g., Wikipedia, books, web text) greatly affect language model (LM) performance. In this paper, we propose Domain Reweighting with Minimax Optimization (DoReMi), which first trains a small proxy model using group distributionally robust optimization (Group DRO) over domains to produce domain weights (mixture proportions) without knowledge of downstream tasks. We then resample a dataset with these domain weights and train a larger, full-sized model. In our experiments, we use DoReMi on a 280M-parameter proxy model to set the domain weights for training an 8B-parameter model (30x larger) more efficiently. On The Pile, DoReMi improves perplexity across all domains, even when it downweights a domain. DoReMi improves average few-shot downstream accuracy by 6.5 percentage points over a baseline model trained using The Pile's default domain weights and reaches the baseline accuracy with 2.6x fewer training steps. On the GLaM dataset, DoReMi, which has no knowledge of downstream tasks, even matches the performance of using domain weights tuned on downstream tasks.
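The core of DoReMi is the Group DRO step on the proxy model: domains where the proxy currently does poorly (relative to a reference) are upweighted, and the resulting weights become the mixture used to resample data for the large model. Below is a minimal, illustrative sketch of one such domain-weight update, assuming an exponentiated-gradient update on per-domain excess loss with uniform smoothing; the function name, step size, and smoothing value are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def update_domain_weights(domain_weights, excess_losses, step_size=1.0, smoothing=1e-3):
    """One Group-DRO-style exponentiated-gradient update of the domain weights.

    domain_weights: current mixture proportions over k domains (sums to 1)
    excess_losses:  per-domain proxy-model loss minus a reference loss, shape [k]
    Illustrative sketch only; hyperparameters are placeholders.
    """
    # Upweight domains where the proxy model currently has high excess loss.
    logits = np.log(domain_weights) + step_size * excess_losses
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Mix with the uniform distribution for stability (smoothing is a hyperparameter).
    k = len(weights)
    return (1 - smoothing) * weights + smoothing * np.ones(k) / k

# Toy usage: three domains, starting from uniform weights, with placeholder losses.
w = np.ones(3) / 3
for _ in range(100):
    excess = np.array([0.5, 0.1, 0.3])  # placeholder per-domain excess losses
    w = update_domain_weights(w, excess)
print(w)
```

In DoReMi, the domain weights averaged over the proxy model's training steps serve as the final mixture proportions for resampling the pretraining data of the larger model.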

Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Code Generation | HumanEval | -- | -- | 1036
Language Understanding | MMLU | Accuracy | 56.9 | 825
Mathematical Reasoning | GSM8K (test) | Accuracy | 45.87 | 770
Commonsense Reasoning | PIQA | Accuracy | 58.3 | 751
Code Generation | HumanEval (test) | -- | -- | 506
Commonsense Reasoning | HellaSwag | HellaSwag Accuracy | 29.4 | 350
Science Question Answering | ARC Challenge | Accuracy | 52.9 | 342
Multitask Language Understanding | MMLU (test) | Accuracy | 44.94 | 303
Code Generation | MBPP (test) | -- | -- | 298
Language Modeling | LAMBADA | Accuracy | 12.4 | 268

Showing 10 of 42 rows.
