
Data Selection for Language Models via Importance Resampling

About

Selecting a suitable pretraining dataset is crucial for both general-domain (e.g., GPT-3) and domain-specific (e.g., Codex) language models (LMs). We formalize this problem as selecting a subset of a large raw unlabeled dataset to match a desired target distribution, given unlabeled target samples. Due to the scale and dimensionality of the raw text data, existing methods use simple heuristics or rely on human experts to manually curate data. Instead, we extend the classic importance resampling approach, used in low dimensions, to LM data selection. We propose Data Selection with Importance Resampling (DSIR), an efficient and scalable framework that estimates importance weights in a reduced feature space for tractability and selects data via importance resampling according to these weights. We instantiate the DSIR framework with hashed n-gram features for efficiency, enabling the selection of 100M documents from the full Pile dataset in 4.5 hours. To measure whether hashed n-gram features preserve the aspects of the data that are relevant to the target, we define KL reduction, a data metric that measures the proximity between the selected pretraining data and the target on some feature space. Across 8 data selection methods (including expert selection), KL reduction on hashed n-gram features correlates strongly with average downstream accuracy (r = 0.82). When selecting data for continued pretraining on a specific domain, DSIR performs comparably to expert curation across 8 target distributions. When pretraining general-domain models (the target is Wikipedia and books), DSIR improves over random selection and heuristic filtering baselines by 2-2.5% on the GLUE benchmark. Code is available at https://github.com/p-lambda/dsir.
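To make the pipeline concrete, here is a minimal sketch of the DSIR idea, not the authors' implementation (see the linked repository for that): documents are featurized as hashed n-gram counts, bag-of-buckets models are fit on the target and raw pools, each raw document gets an importance weight from the log-ratio of the two models, and k documents are resampled in proportion to those weights. The function names, bucket count, and the Gumbel top-k sampling step are illustrative assumptions.

```python
import hashlib

import numpy as np


def hashed_ngram_counts(text, n=2, num_buckets=10_000):
    # Featurize a document as counts of word n-grams (here unigrams and
    # bigrams) hashed into a fixed number of buckets.
    words = text.lower().split()
    counts = np.zeros(num_buckets)
    for k in range(1, n + 1):
        for i in range(len(words) - k + 1):
            gram = " ".join(words[i:i + k])
            bucket = int(hashlib.md5(gram.encode()).hexdigest(), 16) % num_buckets
            counts[bucket] += 1
    return counts


def dsir_select(raw_docs, target_docs, k, num_buckets=10_000, seed=0):
    # Fit smoothed bucket-frequency models for the target and raw pools.
    eps = 1e-8
    p_target = sum(hashed_ngram_counts(d, num_buckets=num_buckets) for d in target_docs) + eps
    p_raw = sum(hashed_ngram_counts(d, num_buckets=num_buckets) for d in raw_docs) + eps
    p_target /= p_target.sum()
    p_raw /= p_raw.sum()
    log_ratio = np.log(p_target) - np.log(p_raw)

    # Log importance weight of each raw document under the feature-space models.
    log_w = np.array([hashed_ngram_counts(d, num_buckets=num_buckets) @ log_ratio
                      for d in raw_docs])

    # Gumbel top-k trick: adding Gumbel noise and taking the top k samples
    # without replacement with probability proportional to exp(log_w).
    rng = np.random.default_rng(seed)
    keys = log_w + rng.gumbel(size=len(raw_docs))
    return list(np.argsort(-keys)[:k])
```

Because the weights live in the hashed feature space rather than over raw tokens, the importance ratios stay tractable even when the raw pool has hundreds of millions of documents; the featurization is embarrassingly parallel.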

Sang Michael Xie, Shibani Santurkar, Tengyu Ma, Percy Liang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 42 | 983 |
| Mathematical Reasoning | MATH | Accuracy | 12.3 | 643 |
| Reasoning | BBH | Accuracy | 55.97 | 507 |
| Commonsense Reasoning | PIQA 1.0 (test) | Accuracy | 81.94 | 48 |
| Commonsense Reasoning | StoryCloze | Accuracy | 67.72 | 34 |
| Reading Comprehension | RACE-m | Accuracy | 0.2507 | 28 |
| Zero-shot Language Understanding and Reasoning | BENCH-PROXY (MMLU, ANLI, HellaSwag, PIQA, SIQA, W.G., ARC-E, ARC-C, C.QA, WSC) (test) | MMLU | 34.13 | 24 |
| Commonsense Reasoning | HellaSwag 1.0 (test) | Accuracy | 63.1 | 17 |
| Commonsense Reasoning | WinoGrande 1.0 (test) | Accuracy | 0.8137 | 15 |
| World Knowledge and Reading Comprehension | LM Evaluation Harness NQ, MMLU STEM, ARC, SciQ, LogiQA, BoolQ | NQ Accuracy | 29.22 | 15 |

(Showing 10 of 14 rows.)

Other info

Code: https://github.com/p-lambda/dsir
