Two-Stage Optimizer-Aware Online Data Selection for Large Language Models

About

Gradient-based data selection offers a principled framework for estimating sample utility in large language model (LLM) fine-tuning, but existing methods are mostly designed for offline settings. They are therefore less suited to online fine-tuning, where data arrives sequentially, sample utility is step-dependent, and the effective update geometry is shaped by adaptive optimizers. We propose an optimizer-aware framework for gradient-based online data selection and reweighting in LLM fine-tuning. Our key idea is to view online selection not as static sample ranking, but as shaping the next target-oriented update under the optimizer state. We formulate this as an optimizer-aware update-matching problem, establish its connection to second-order target utility, and show why subset-level construction must account for interactions and redundancy among selected samples. Based on this view, we develop a two-stage Filter-then-Weight algorithm that first filters geometrically useful candidates and then optimizes their coefficients. To make the framework practical for LLMs, we introduce a factorized outer-product gradient representation and optimized matrix computations for long-context data. Experiments show that our method consistently improves convergence and downstream performance over existing online data selection baselines under the same data budget.
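The two-stage Filter-then-Weight idea can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes flattened per-sample gradients, a target gradient, and an Adam-style second-moment estimate as the optimizer state, and all function and variable names here are illustrative. Stage 1 filters candidates by alignment with the target update in the optimizer-preconditioned geometry; Stage 2 solves a least-squares update-matching problem for their coefficients, which accounts for redundancy among the kept samples rather than ranking them independently.

```python
import numpy as np

def filter_then_weight(sample_grads, target_grad, second_moment, k=4, eps=1e-8):
    """Illustrative two-stage selection sketch (not the paper's exact algorithm).

    sample_grads: (n, d) per-sample gradients
    target_grad:  (d,)   gradient on the target objective
    second_moment:(d,)   Adam-style second-moment estimate (optimizer state)
    """
    precond = 1.0 / np.sqrt(second_moment + eps)   # Adam-style preconditioner
    updates = sample_grads * precond               # (n, d) preconditioned updates
    target = target_grad * precond                 # (d,)   preconditioned target update

    # Stage 1 (Filter): keep the k candidates whose updates align best
    # with the target update under the optimizer-shaped geometry.
    sims = updates @ target / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(target) + eps)
    keep = np.argsort(sims)[-k:]

    # Stage 2 (Weight): coefficients minimizing || sum_i w_i u_i - u_target ||^2,
    # a subset-level fit that handles interactions/redundancy among samples.
    w, *_ = np.linalg.lstsq(updates[keep].T, target, rcond=None)
    return keep, w
```

Because Stage 2 fits the subset jointly, two near-duplicate samples split weight between them instead of each receiving full credit, which is the redundancy effect the subset-level construction is meant to capture.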

Fangxin Wang, Peyman Baghershahi, Langzhou He, Henry Peng Zou, Sourav Medya, Philip S. Yu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multiple-choice Question Answering | MMLU (5-shot) | Accuracy: 46.63 | 73 |
| Multilingual Question Answering | TyDiQA (1-shot, macro-averaged) | F1 Score: 48.86 | 28 |
