
SDFP: Speculative Decoding with FIT-Pruned Models for Training-Free and Plug-and-Play LLM Acceleration

About

Large language models (LLMs) underpin interactive multimedia applications such as captioning, retrieval, recommendation, and creative content generation, yet their autoregressive decoding incurs substantial latency. Speculative decoding reduces latency using a lightweight draft model, but deployment is often limited by the cost and complexity of acquiring, tuning, and maintaining an effective draft model. Recent approaches usually require auxiliary training or specialization, and even training-free methods incur costly search or optimization. We propose SDFP, a fully training-free and plug-and-play framework that builds the draft model via Fisher Information Trace (FIT)-based layer pruning of a given LLM. Using layer sensitivity as a proxy for output perturbation, SDFP removes low-impact layers to obtain a compact draft while preserving compatibility with the original model for standard speculative verification. SDFP needs no additional training, hyperparameter tuning, or separately maintained drafts, enabling rapid, deployment-friendly draft construction. Across benchmarks, SDFP delivers 1.32x-1.5x decoding speedup without altering the target model's output distribution, supporting low-latency multimedia applications.
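The draft-construction step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the `keep_ratio` parameter, and the approximation of per-layer FIT as summed squared gradients (an empirical Fisher diagonal trace over a small calibration set) are all assumptions made for the example.

```python
# Minimal sketch of FIT-based layer selection for building a draft model.
# ASSUMPTIONS (not from the paper): per-layer FIT is approximated by the
# summed squared gradients of the loss w.r.t. that layer's parameters,
# and `keep_ratio` is a hypothetical knob controlling draft depth.

def fit_scores(per_layer_grads):
    """Per-layer Fisher Information Trace proxy: sum of squared gradient
    entries, accumulated over a small calibration set."""
    return [sum(g * g for g in grads) for grads in per_layer_grads]

def prune_layers(scores, keep_ratio=0.75):
    """Keep the most sensitive layers and drop the low-impact ones.
    Returns the sorted indices of layers retained in the draft model."""
    n_keep = max(1, int(round(len(scores) * keep_ratio)))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:n_keep])
```

The retained sub-network serves as the draft: it proposes tokens that the full model then checks with standard speculative verification, which is what preserves the target model's output distribution exactly.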

Hanyu Wei, Zunhai Su, Peng Lu, Chao Li, Spandan Tiwari, Ashish Sirasao, Yuhan Dong • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Summarization | CNN/Daily Mail (test) | Speedup (x): 1.45 | 15 |
| Mathematical Reasoning | GSM8K (test) | Speedup (x): 1.37 | 15 |
| Narrative Generation | TinyStories (test) | Speedup (x): 1.55 | 15 |
| Speculative Decoding Efficiency | CNN/DM, GSM8K, TinyStories (aggregate) | Decoding Speed (tokens/s): 28.05 | 15 |
