Rethinking Data Mixing from the Perspective of Large Language Models

About

Data mixing strategies are essential for large language model (LLM) training: empirical evidence shows that an inappropriate strategy can significantly reduce generalization. Although recent methods have improved empirical performance, several fundamental questions remain open: what constitutes a domain, whether human and model perceptions of domains are aligned, and how domain weighting influences generalization. We address these questions by establishing formal connections between gradient dynamics and domain distributions, offering a theoretical framework that clarifies the role of domains in training dynamics. Building on this analysis, we introduce DoGraph, a reweighting framework that formulates data scheduling as a graph-constrained optimization problem. Extensive experiments on GPT-2 models of varying scales demonstrate that DoGraph consistently achieves competitive performance.
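The abstract does not spell out DoGraph's update rule, so the following is only a minimal sketch of what a graph-constrained domain-reweighting step could look like: per-domain validation losses drive an exponentiated-gradient step on the mixing weights, while a graph Laplacian penalty keeps the weights of adjacent (similar) domains close. All names here (`reweight_domains`, `adjacency`, `smooth`) are hypothetical, not taken from the paper.

```python
import numpy as np

def reweight_domains(weights, domain_losses, adjacency, lr=0.1, smooth=0.5):
    """One exponentiated-gradient step on domain mixing weights.

    weights:       current mixing proportions over k domains (sums to 1)
    domain_losses: per-domain validation losses (higher -> upweight)
    adjacency:     k x k domain-similarity graph (hypothetical input; how
                   the paper constructs its graph is not specified here)
    lr, smooth:    step size and graph-smoothness strength (assumed)
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency          # graph Laplacian L = D - A
    # Gradient of -<losses, w> + (smooth/2) * w^T L w: upweight high-loss
    # domains while keeping graph-adjacent domains' weights similar.
    grad = -domain_losses + smooth * laplacian @ weights
    new_w = weights * np.exp(-lr * grad)    # multiplicative (mirror) update
    return new_w / new_w.sum()              # renormalize onto the simplex

# Toy usage: three domains on a path graph; the high-loss third domain
# receives a larger share after the update.
w = np.full(3, 1.0 / 3)
losses = np.array([2.1, 1.4, 3.0])
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
w = reweight_domains(w, losses, A)
```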

Yuanjian Xu, Tianze Sun, Changwei Xu, XinLong Zhao, Jianing Hao, Ran Chen, Yang Liu, Ruijie Xu, Stephen Chen, Guang Zhang • 2026

Related benchmarks

Task                  | Dataset   | Metric             | Result | Rank
Commonsense Reasoning | PIQA      | Accuracy           | 59.2   | 751
Commonsense Reasoning | HellaSwag | HellaSwag Accuracy | 29.8   | 350
Language Modeling     | LAMBADA   | Accuracy           | 15.9   | 268
Commonsense Reasoning | COPA      | Accuracy           | 65     | 197
Commonsense Reasoning | OBQA      | Accuracy           | 27.8   | 117
Commonsense Reasoning | WinoG     | Accuracy           | 51.2   | 48
Commonsense Reasoning | HellaSwag | Accuracy           | 26.3   | 47
Reading Comprehension | SciQ      | Accuracy           | 66.1   | 32
Commonsense Reasoning | LogiQA    | Accuracy           | 28.3   | 21
Reading Comprehension | ARC-E     | Accuracy           | 39.2   | 21

(Showing 10 of 15 rows.)
