Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
About
In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a larger model can yield high-quality data, we study whether smaller models can be used for perplexity-based pruning and how pruning is affected by the domain composition of the data being pruned. We demonstrate that for multiple dataset compositions, perplexity-based pruning of pretraining data can *significantly* improve downstream task performance: pruning based on perplexities computed with a 125-million-parameter model improves the average downstream task performance of a 3-billion-parameter model by up to 2.04 and achieves up to a 1.45× reduction in the pretraining steps needed to reach commensurate baseline performance. Furthermore, we demonstrate that such perplexity-based data pruning also yields downstream performance gains in the over-trained and data-constrained regimes.
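To make the setup concrete, below is a minimal sketch of perplexity-based pruning with a small reference model. It is an illustration under stated assumptions, not the paper's released pipeline: the `EleutherAI/pythia-125m` checkpoint is a hypothetical stand-in for the 125-million-parameter reference model, and keeping the lowest-perplexity fraction of documents is just one possible selection criterion (the choice of which perplexity range to keep is a hyperparameter in this kind of pruning).

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in reference model (assumption): any small causal LM works here.
# The paper's actual 125M reference model is trained on the target corpus.
MODEL_NAME = "EleutherAI/pythia-125m"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


@torch.no_grad()
def perplexity(text: str, max_length: int = 1024) -> float:
    """Perplexity of `text` under the reference model: exp(mean token NLL)."""
    ids = tokenizer(
        text, return_tensors="pt", truncation=True, max_length=max_length
    ).input_ids
    # For causal LMs, passing labels=input_ids returns the mean
    # next-token cross-entropy over the sequence.
    loss = model(ids, labels=ids).loss
    return math.exp(loss.item())


def prune_by_perplexity(docs: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Score every document and keep the lowest-perplexity fraction.

    Keeping the low-perplexity end is one illustrative criterion; medium- or
    high-perplexity selections are equally valid hyperparameter choices.
    """
    scored = sorted(docs, key=perplexity)
    return scored[: max(1, int(len(scored) * keep_fraction))]
```

In practice the whole pretraining corpus would be scored in batches on accelerators before a single selection pass; this sketch scores one document at a time for clarity.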
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | BBH | Accuracy | 9.88 | 507 |
| Commonsense Reasoning | StoryCloze | Accuracy | 67.34 | 34 |
| Reading Comprehension | RACE-m | Accuracy | 0.2437 | 28 |
| Zero-shot Language Understanding and Reasoning | BENCH-PROXY (MMLU, ANLI, HellaSwag, PIQA, SIQA, W.G., ARC-E, ARC-C, C.QA, WSC) (test) | MMLU | 33.17 | 24 |
| Reading Comprehension | RACE | -- | -- | 12 |
| Natural Language Inference | AX-g | Accuracy | 51.12 | 9 |
| Natural Language Inference | AX-b | Accuracy | 54.98 | 9 |