
DoGE: Domain Reweighting with Generalization Estimation

About

The coverage and composition of the pretraining data significantly impact the generalization ability of Large Language Models (LLMs). Despite its importance, recent LLMs still rely on heuristics and trial and error to increase or reduce the influence of data domains. We propose DOmain reweighting with Generalization Estimation (DoGE), which optimizes the probability of sampling from each domain (domain weights) in a principled way. Our approach is a two-stage process: (i) train a proxy model to obtain domain weights using a bi-level optimization algorithm; (ii) train a larger base model by sampling training domains according to the learned domain weights. In our experiments, we extensively show how DoGE improves the generalization of the base model to any target data mixture. On the SlimPajama dataset, our base model achieves better perplexity and few-shot reasoning accuracy across 6 tasks compared to baseline methods. Moreover, when aiming to generalize to an out-of-domain target task that is unseen in the pretraining corpus (OOD domain), DoGE effectively identifies inter-domain dependencies and consistently achieves better test perplexity on the target domain.
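To make the two-stage procedure concrete, below is a minimal, hypothetical sketch in Python/NumPy. It assumes an exponentiated-gradient-style multiplicative update driven by per-domain generalization scores, standing in for the proxy model's bi-level estimate; the domain names, learning rate, and the random stand-in scores are illustrative placeholders, not the authors' exact algorithm.

```python
import numpy as np

# Hypothetical sketch of the two-stage recipe described in the abstract.
# The update rule and scoring are assumptions for illustration only.

def update_domain_weights(weights, gen_scores, lr=0.1):
    """Multiplicative-weights step: upweight domains whose updates are
    estimated to improve generalization on the target mixture."""
    logits = np.log(weights) + lr * gen_scores
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    return w / w.sum()

def sample_domain(rng, domains, weights):
    """Stage (ii): the base model draws each batch's domain from the
    learned weights."""
    return rng.choice(domains, p=weights)

domains = ["arxiv", "github", "wikipedia", "common_crawl"]  # assumed names
weights = np.full(len(domains), 1.0 / len(domains))  # uniform init
rng = np.random.default_rng(0)

# Stage (i): while training the proxy model, re-estimate per-domain
# generalization scores (e.g., gradient alignment with the target) and
# update the weights. Random numbers stand in for the scores here so
# the sketch runs end to end.
for step in range(100):
    gen_scores = rng.normal(size=len(domains))
    weights = update_domain_weights(weights, gen_scores)

print(dict(zip(domains, weights.round(3))))
print("next batch from:", sample_domain(rng, domains, weights))
```

The multiplicative update keeps the weights on the probability simplex by construction, so the learned mixture can be handed directly to the base model's data sampler in stage (ii).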

Simin Fan, Matteo Pagliardini, Martin Jaggi • 2023

Related benchmarks

| Task                  | Dataset       | Metric        | Result | Rank |
|-----------------------|---------------|---------------|--------|------|
| Commonsense Reasoning | PIQA          | Accuracy      | 58.5   | 751  |
| Commonsense Reasoning | HellaSwag     | Accuracy      | 29.2   | 350  |
| Language Modeling     | LAMBADA       | Accuracy      | 11.7   | 268  |
| Commonsense Reasoning | COPA          | Accuracy      | 64.5   | 197  |
| Commonsense Reasoning | OBQA          | Accuracy      | 27.1   | 117  |
| Commonsense Reasoning | WinoG         | Accuracy      | 50.4   | 48   |
| Commonsense Reasoning | HellaSwag     | Accuracy      | 26.2   | 47   |
| Reading Comprehension | SciQ          | Accuracy      | 60.1   | 32   |
| Text Retrieval        | BEIR-5 (test) | Avg. NDCG@10  | 50.4   | 26   |
| Commonsense Reasoning | LogiQA        | Accuracy      | 27.2   | 21   |

Showing 10 of 18 rows.
