
Uncertainty-Aware Gradient Signal-to-Noise Data Selection for Instruction Tuning

About

Instruction tuning is a standard paradigm for adapting large language models (LLMs), but modern instruction datasets are large, noisy, and redundant, making full-data fine-tuning costly and often unnecessary. Existing data selection methods either build expensive gradient datastores or assign static scores from a weak proxy, largely ignoring the model's evolving uncertainty and thereby discarding a key source of LLM interpretability. We propose GRADFILTERING, an objective-agnostic, uncertainty-aware data selection framework that uses a small GPT-2 proxy with a LoRA ensemble and aggregates per-example gradients into a Gradient Signal-to-Noise Ratio (G-SNR) utility. Our method matches or surpasses random subsets and strong baselines in most LLM-as-a-judge evaluations as well as in human assessment. Moreover, GRADFILTERING-selected subsets converge faster than competitive filters under the same compute budget, reflecting the benefit of uncertainty-aware scoring.
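The abstract does not spell out how G-SNR is computed; a minimal sketch, assuming the utility compares the squared mean to the variance of an example's gradients across the LoRA ensemble members (function names, shapes, and the `eps` stabilizer are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def gsnr_utility(member_grads, eps=1e-8):
    """Hypothetical G-SNR score for one example.

    member_grads: array of shape (K, D) -- gradients of the example's loss
    w.r.t. D proxy parameters, one row per LoRA ensemble member.
    Returns the squared ensemble-mean gradient divided by the ensemble
    variance, averaged over parameters: high when members agree (signal),
    low when they disagree (noise/uncertainty).
    """
    g = np.asarray(member_grads, dtype=float)
    mean = g.mean(axis=0)                      # consensus gradient direction
    var = g.var(axis=0)                        # disagreement across members
    return float(np.mean(mean ** 2 / (var + eps)))

def select_top(scores, frac=0.1):
    """Keep the indices of the top `frac` fraction of examples by score."""
    k = max(1, int(len(scores) * frac))
    return np.argsort(scores)[::-1][:k]
```

Under this reading, examples whose gradients are consistent across the ensemble (high signal-to-noise) are retained, while examples that induce conflicting updates are filtered out.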

Zhihang Yuan, Chengyu Yue, Long Huang, Litu Ou, Lei Shi • 2026

Related benchmarks

Task: Instruction Tuning
Dataset: Alpaca instruction-tuning 52k
Result: Pairwise Winning Score 116
Rank: 19
