
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models

About

In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a larger model can yield high-quality data, we investigate whether smaller models can be used for perplexity-based pruning and how pruning is affected by the domain composition of the data being pruned. We demonstrate that for multiple dataset compositions, perplexity-based pruning of pretraining data can significantly improve downstream task performance: pruning based on perplexities computed with a 125 million parameter model improves the average performance on downstream tasks of a 3 billion parameter model by up to 2.04 and achieves up to a 1.45× reduction in pretraining steps to reach commensurate baseline performance. Furthermore, we demonstrate that such perplexity-based data pruning also yields downstream performance gains in the over-trained and data-constrained regimes.
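The pruning recipe described in the abstract is straightforward to prototype: score every candidate document with a small reference model's perplexity, then keep only the documents falling in the desired perplexity range. The sketch below uses Hugging Face Transformers with GPT-2 (~124M parameters) as a stand-in small reference model; the model name, the keep fraction, and the choice to keep the lowest-perplexity documents are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of perplexity-based data pruning with a small reference model.
# Assumptions (not from the paper): GPT-2 as the reference model, a 50% keep
# fraction, and keeping the lowest-perplexity documents as the selection rule.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def sequence_perplexity(model, tokenizer, text, device="cpu"):
    """Perplexity of one document under the reference model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss is mean token NLL
    return torch.exp(out.loss).item()


def prune_by_perplexity(texts, model, tokenizer, keep_fraction=0.5, device="cpu"):
    """Score every document, then keep the keep_fraction with the lowest perplexity."""
    scores = [sequence_perplexity(model, tokenizer, t, device) for t in texts]
    ranked = sorted(range(len(texts)), key=lambda i: scores[i])
    kept = ranked[: int(len(texts) * keep_fraction)]
    return [texts[i] for i in sorted(kept)]


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()
    corpus = [
        "Example pretraining document one.",
        "Example pretraining document two.",
    ]
    pruned = prune_by_perplexity(corpus, model, tokenizer, keep_fraction=0.5, device=device)
    print(f"Kept {len(pruned)} of {len(corpus)} documents")
```

In practice the scoring pass would be batched and run over the full pretraining corpus, and the selection rule (low, medium, or high perplexity) can be chosen per dataset composition.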

Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L. Leavitt, Mansheej Paul • 2024

Related benchmarks

Task | Dataset | Result | Rank
Reasoning | BBH | Accuracy: 9.88 | 507
Commonsense Reasoning | StoryCloze | Accuracy: 67.34 | 34
Reading Comprehension | RACE-m | Accuracy: 0.2437 | 28
Zero-shot Language Understanding and Reasoning | BENCH-PROXY (MMLU, ANLI, HellaSwag, PIQA, SIQA, W.G., ARC-E, ARC-C, C.QA, WSC) (test) | MMLU: 33.17 | 24
Reading Comprehension | RACE | -- | 12
Natural Language Inference | AX-g | Accuracy: 51.12 | 9
Natural Language Inference | AX-b | Accuracy: 54.98 | 9
