
Compute-Constrained Data Selection

About

Data selection can reduce the amount of training data needed to finetune LLMs; however, the efficacy of data selection scales directly with its compute. Motivated by the practical challenge of compute-constrained finetuning, we consider the setting in which both the cost of selecting data and the cost of training are budgeted for. We first formalize the problem with a cost-aware utility function, modeling data selection as a trade-off between initial selection cost and training gain. We run a comprehensive sweep of experiments across multiple tasks, varying the compute budget by scaling finetuning tokens, model sizes, and data selection compute. Interestingly, we find that many powerful data selection methods are almost never compute-optimal, and that cheaper data selection alternatives dominate from both a theoretical and an empirical perspective. For compute-optimal training, we find that perplexity-based and gradient-based data selection require training-to-selection model size ratios of 5x and 10x, respectively.
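The core trade-off in the abstract can be sketched numerically: under a fixed compute budget, every FLOP spent scoring candidate examples is a FLOP no longer available for training. The function and all constants below are illustrative assumptions, not the paper's actual utility function or measured costs.

```python
# Hypothetical sketch of the budget trade-off described above.
# c_total   : fixed overall compute budget (FLOPs)        -- assumed
# c_select  : FLOPs to score one candidate example        -- assumed
# c_train   : FLOPs to train on one selected example      -- assumed

def train_examples_under_budget(c_total, c_select, c_train, n_candidates):
    """Return how many examples can still be trained on after spending
    part of the budget scoring n_candidates candidate examples."""
    selection_cost = c_select * n_candidates
    remaining = c_total - selection_cost
    if remaining <= 0:
        return 0  # selection consumed the entire budget
    return int(remaining // c_train)

# Scoring more candidates (more expensive selection) shrinks the
# training budget; a selection method is only worth it if the quality
# gain per example outweighs the examples lost here.
budget = 1e12
no_selection = train_examples_under_budget(budget, 1e6, 1e7, 0)
with_selection = train_examples_under_budget(budget, 1e6, 1e7, 500_000)
print(no_selection, with_selection)  # selection halves trainable examples
```

With these illustrative numbers, scoring 500k candidates costs half the budget, so a selection method would need to roughly double per-example training gain to break even.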

Junjie Oscar Yin, Alexander M. Rush • 2024

Related benchmarks

| Task                             | Dataset        | Result               | Rank |
|----------------------------------|----------------|----------------------|------|
| Language Understanding           | MMLU           | Accuracy: 61.7       | 756  |
| Reasoning                        | BBH            | Accuracy: 43.1       | 507  |
| Logical Reasoning                | BBH            | Accuracy: 82.49      | 93   |
| General Reasoning                | BIG-Bench Hard | --                   | 68   |
| Multilingual Question Answering  | TyDiQA         | Accuracy: 58.4       | 44   |
| Code Generation                  | MBPP           | MBPP Accuracy: 83.98 | 22   |
| Mathematical Reasoning           | GSM8K          | GSM Score: 91.22     | 7    |
| Mathematical Reasoning           | gsm            | GSM Accuracy: 91.01  | 7    |
| Multitask Language Understanding | MMLU           | MMLU Score: 76.5     | 7    |
