
When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale

About

Large volumes of text data have contributed significantly to the development of large language models (LLMs) in recent years. This data is typically acquired by scraping the internet, leading to pretraining datasets composed of noisy web text. To date, efforts to prune these datasets down to a higher-quality subset have relied on hand-crafted heuristics encoded as rule-based filters. In this work, we take a wider view and explore scalable estimates of data quality that can be used to systematically measure the quality of pretraining data. We perform a rigorous comparison at scale of the simple data-quality estimator of perplexity, as well as the more sophisticated and computationally intensive estimates of Error L2-Norm and memorization. These metrics are used to rank and prune pretraining corpora, and we subsequently compare LLMs trained on these pruned datasets. Surprisingly, we find that the simple technique of perplexity outperforms our more computationally expensive scoring methods. We improve over our no-pruning baseline while training on as little as 30% of the original training dataset. Our work sets the foundation for unexplored strategies in automatically curating high-quality corpora and suggests that the majority of pretraining data can be removed while retaining performance.

Max Marion, Ahmet Üstün, Luiza Pozzobon, Alex Wang, Marzieh Fadaee, Sara Hooker • 2023
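For intuition, here is a minimal sketch of perplexity-based pruning in the spirit of the approach described in the abstract. It assumes a small HuggingFace causal LM as the reference scorer (GPT-2 here, purely a stand-in; the paper's reference models and thresholds differ) and keeps the fraction of documents the scorer finds least surprising. The paper also studies other selection rules (e.g., keeping middle- or high-perplexity subsets), so the rule below is illustrative, not the authors' exact method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical reference model; any causal LM that can score text works here.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    # Token-level perplexity of `text` under the reference model:
    # exp of the mean next-token cross-entropy loss.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def prune_by_perplexity(docs: list[str], keep_fraction: float = 0.3) -> list[str]:
    # Score every document, then keep the `keep_fraction` with the lowest
    # perplexity. This is one possible selection rule; the paper compares
    # several (low-, middle-, and high-perplexity subsets).
    scored = sorted(docs, key=perplexity)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

corpus = [
    "Large language models are trained on web-scale corpora.",
    "asdf qwer zxcv uiop hjkl",  # noisy web text tends to score high perplexity
    "The mitochondria is the powerhouse of the cell.",
]
print(prune_by_perplexity(corpus, keep_fraction=0.3))
```

The appeal of scoring this way, per the abstract, is cost: a single scoring pass with a modest reference model is far cheaper than the more computationally intensive EL2N and memorization estimates, yet it outperformed them in the authors' comparison.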

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | Accuracy: 83.5 | 1455
Visual Question Answering | VQA v2 | Accuracy: 66.1 | 1362
Text-based Visual Question Answering | TextVQA | Accuracy: 54.6 | 807
Multimodal Evaluation | MME | -- | 658
Multimodal Reasoning | MM-Vet | MM-Vet Score: 30.7 | 431
Multimodal Evaluation | MM-Vet | Score: 29.3 | 180
Diagram Understanding | AI2D (test) | Accuracy: 38.02 | 131
Multimodal Evaluation | MMBench | MMB Score: 25.7 | 118
Science Question Answering | ScienceQA (SQA-I) | Accuracy: 53.3 | 103
Multimodal Evaluation | SEED-Bench | Accuracy: 38.8 | 95

(Showing 10 of 22 rows)
