
Large-Scale Data Selection for Instruction Tuning

About

Selecting high-quality training data from a larger pool is a crucial step when instruction-tuning language models, as carefully curated datasets often produce models that outperform those trained on much larger, noisier datasets. Automated data selection approaches for instruction-tuning are typically tested by selecting small datasets (roughly 10k samples) from small pools (100-200k samples). However, popular deployed instruction-tuned models often train on hundreds of thousands to millions of samples, subsampled from even larger data pools. We present a systematic study of how well data selection methods scale to these settings, selecting up to 2.5M samples from pools of up to 5.8M samples and evaluating across 7 diverse tasks. We show that many recently proposed methods fall short of random selection in this setting (while using more compute), and even decline in performance when given access to larger pools of data to select over. However, we find that a variant of representation-based data selection (RDS+), which uses weighted mean pooling of pretrained LM hidden states, consistently outperforms more complex methods across all settings tested -- all whilst being more compute-efficient. Our findings highlight that the scaling properties of proposed automated selection methods should be more closely examined. We release our code, data, and models at https://github.com/hamishivi/automated-instruction-selection.
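To make the RDS+ idea concrete, below is a minimal numpy sketch of representation-based selection: each example is embedded by a weighted mean pool over LM hidden states, and pool examples are ranked by cosine similarity to a set of query (target-task) embeddings. The position-proportional weighting and the max-similarity scoring rule here are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def weighted_mean_pool(hidden_states):
    """Pool (seq_len, dim) LM hidden states into one (dim,) embedding.

    Uses position-proportional weights (later tokens count more), one
    common choice for weighted mean pooling; the exact weighting used
    by RDS+ may differ.
    """
    seq_len = hidden_states.shape[0]
    w = np.arange(1, seq_len + 1, dtype=np.float64)
    w /= w.sum()
    return w @ hidden_states

def select_top_k(pool_embs, query_embs, k):
    """Return indices of the k pool examples most similar to the queries.

    Scores each candidate by its maximum cosine similarity to any
    query embedding (an illustrative scoring rule).
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = normalize(pool_embs) @ normalize(query_embs).T  # (n_pool, n_query)
    scores = sims.max(axis=1)
    return np.argsort(-scores)[:k]
```

In practice the embeddings would come from a pretrained LM's final-layer hidden states; the key property the paper highlights is that this simple similarity-based ranking stays effective, and cheap, as the candidate pool grows to millions of samples.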

Hamish Ivison, Muru Zhang, Faeze Brahman, Pang Wei Koh, Pradeep Dasigi • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Understanding | MMLU | Accuracy 62.4 | 756 |
| Reasoning | BBH | Accuracy 45.1 | 507 |
| Diagram Understanding | AI2D (test) | Accuracy 50.84 | 107 |
| Logical Reasoning | BBH | Accuracy 82.28 | 93 |
| General Reasoning | BIG-Bench Hard | -- | 68 |
| Multilingual Question Answering | TyDiQA | Accuracy 62.6 | 44 |
| Object Hallucination Evaluation | POPE (test) | -- | 44 |
| Optical Character Recognition | OCRBench (test) | -- | 34 |
| Multi-modal Evaluation | MME (test) | Perception Score 1410 | 32 |
| Code Generation | MBPP | MBPP Accuracy 83.42 | 22 |
Showing 10 of 15 rows
