
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

About

The mixture proportions of pretraining data domains (e.g., Wikipedia, books, web text) greatly affect language model (LM) performance. In this paper, we propose Domain Reweighting with Minimax Optimization (DoReMi), which first trains a small proxy model using group distributionally robust optimization (Group DRO) over domains to produce domain weights (mixture proportions) without knowledge of downstream tasks. We then resample a dataset with these domain weights and train a larger, full-sized model. In our experiments, we use DoReMi on a 280M-parameter proxy model to set the domain weights for training an 8B-parameter model (30x larger) more efficiently. On The Pile, DoReMi improves perplexity across all domains, even when it downweights a domain. DoReMi improves average few-shot downstream accuracy by 6.5 percentage points over a baseline model trained using The Pile's default domain weights and reaches the baseline accuracy with 2.6x fewer training steps. On the GLaM dataset, DoReMi, which has no knowledge of downstream tasks, even matches the performance of using domain weights tuned on downstream tasks.

Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, Adams Wei Yu • 2023
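The core of the proxy-model phase described above is a Group DRO-style multiplicative update: domains where the proxy model's loss exceeds a reference model's loss get upweighted. A minimal sketch of one such weight update is below; the step size `eta` and `smoothing` mixing parameter are illustrative hyperparameter names, not necessarily the paper's exact formulation.

```python
import numpy as np

def doremi_weight_update(weights, proxy_losses, ref_losses,
                         eta=1.0, smoothing=1e-3):
    """One exponentiated-gradient update on per-domain excess loss.

    weights:       current domain weights (sums to 1)
    proxy_losses:  per-domain loss of the small proxy model
    ref_losses:    per-domain loss of a fixed reference model
    """
    # Clipped excess loss: only domains the proxy does worse on matter.
    excess = np.maximum(proxy_losses - ref_losses, 0.0)
    # Multiplicatively upweight the hardest domains, then renormalize.
    w = weights * np.exp(eta * excess)
    w = w / w.sum()
    # Mix with the uniform distribution for stability (smoothing).
    k = len(w)
    return (1.0 - smoothing) * w + smoothing * np.ones(k) / k
```

In DoReMi the final domain weights are obtained by averaging these per-step weights over the proxy-model training run, and the large model is then trained on data resampled according to them.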

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | GSM8K (test) | Accuracy: 45.87 | 751
Code Generation | HumanEval (test) | -- | 444
Multitask Language Understanding | MMLU (test) | Accuracy: 44.94 | 303
Code Generation | MBPP (test) | -- | 276
Science Question Answering | ARC Challenge | Accuracy: 25.84 | 234
Science Question Answering | ARC-E | Accuracy: 72.59 | 138
Commonsense Reasoning | WinoGrande (val) | Accuracy: 56.75 | 87
Commonsense Reasoning | CommonsenseQA (val) | Accuracy: 65.11 | 52
Text Retrieval | BEIR-5 (test) | Avg. NDCG@10: 49 | 26
Science Question Answering | ARC Easy | Accuracy: 27.75 | 26
(Showing 10 of 18 rows.)
