
Flatter Tokens are More Valuable for Speculative Draft Model Training

About

Speculative Decoding (SD) is a key technique for accelerating Large Language Model (LLM) inference, but it typically requires training a draft model on a large dataset. We approach this problem from a data-centric perspective, finding that not all training samples contribute equally to the SD acceptance rate. Specifically, our theoretical analysis and empirical validation reveal that tokens inducing flatter predictive distributions from the target model are more valuable than those yielding sharply peaked distributions. Based on this insight, we propose flatness, a new metric to quantify this property, and develop the Sample-level-flatness-based Dataset Distillation (SFDD) approach, which filters the training data to retain only the most valuable samples. Experiments on the EAGLE framework demonstrate that SFDD achieves over 2× training speedup using only 50% of the data, while keeping the final model's inference speedup within 4% of the full-dataset baseline. This work introduces an effective, data-centric approach that substantially improves training efficiency for Speculative Decoding. Our code is available at https://github.com/fjm9933/Flatness.
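The abstract does not give the exact flatness formula, so the following is a minimal sketch of the filtering idea, assuming flatness is measured as the mean Shannon entropy of the target model's per-token predictive distributions and that the target model exposes a Hugging Face-style causal-LM interface. The names `sample_flatness` and `distill_dataset` are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sample_flatness(logits: torch.Tensor) -> float:
    """Mean Shannon entropy of the target model's per-token
    predictive distributions for one training sample.

    logits: (seq_len, vocab_size) target-model logits.
    Higher entropy = flatter distributions, which (per the paper's
    insight) marks a more valuable sample for draft-model training.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)  # (seq_len,)
    return entropy.mean().item()

def distill_dataset(samples, target_model, keep_ratio=0.5):
    """Keep the keep_ratio fraction of samples with the highest
    sample-level flatness (a hypothetical SFDD-style filter).

    Assumes each sample holds "input_ids" of shape (1, seq_len)
    and that target_model(...) returns an object with .logits.
    """
    scored = []
    with torch.no_grad():
        for sample in samples:
            logits = target_model(sample["input_ids"]).logits[0]
            scored.append((sample_flatness(logits), sample))
    # Sort flattest-first and retain the top fraction.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    keep = int(len(scored) * keep_ratio)
    return [sample for _, sample in scored[:keep]]
```

Under these assumptions, running `distill_dataset` once over the draft-training corpus with `keep_ratio=0.5` yields the half-sized training set that the abstract reports as giving over 2× training speedup.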

Jiaming Fan, Daming Cao, Xiangzhong Luo, Jiale Fu, Chonghan Liu, Xu Yang • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Instruction Following | Alpaca | Speedup (x) | 2.66 | 63
Averaged performance across five downstream tasks | Average | Overall Speedup | 2.41 | 8
Mathematical Reasoning | GSM8K | Speedup | 2.69 | 8
Multi-turn Dialogue | MT-Bench (MTB) | Speedup Factor | 2.44 | 8
Question Answering | Natural Questions (NQ) | Speedup | 2.14 | 8
Summarization | CNN/DM | Speedup | 2.14 | 8
