
Training-free Dropout Sampling for Semantic Token Acceptance in Speculative Decoding

About

Speculative decoding accelerates large language model inference by proposing tokens with a lightweight draft model and selectively accepting them using a target model. This work introduces DropMatch, a novel approach that matches draft tokens to the predictive distribution of the target model via Monte Carlo dropout applied exclusively to the LM head, enabling sampling-based acceptance decisions. By generating multiple decoding paths, the method forms an empirical token distribution against which draft tokens are evaluated for consistency. This acceptance mechanism lets the model adaptively control the number of decoding paths under an appropriate dropout probability, preventing substantial distortion of the target model's predictive distribution. The proposed method operates in a training-free, data-free, and calibration-free manner, requires no architectural modification to pretrained models, and can be orthogonally integrated with a wide range of existing speculative decoding and inference acceleration techniques. Experiments across multiple benchmarks demonstrate that the approach increases acceptance length while maintaining competitive task performance, yielding inference speedups ranging from 1.09x to 1.33x over the standard baseline, and up to an additional 1.09x speedup when applied on top of EAGLE3.
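The abstract describes the core mechanism: run the LM head several times under Monte Carlo dropout, treat the per-path greedy tokens as an empirical distribution, and accept a draft token when it is consistent with that distribution. The sketch below illustrates this idea under stated assumptions; the function name `mc_dropout_accept`, the vote threshold `tau`, and the majority-vote acceptance rule are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def mc_dropout_accept(hidden, W_lm, draft_token,
                      n_paths=8, p_drop=0.1, tau=0.5, rng=None):
    """Hedged sketch of a DropMatch-style acceptance test.

    Applies dropout to the LM-head input `n_paths` times, collects the
    greedy token from each dropout path, and accepts the draft token if
    it wins in at least a fraction `tau` of paths. All names and
    thresholds here are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    votes = 0
    for _ in range(n_paths):
        # Monte Carlo dropout applied exclusively at the LM head.
        mask = rng.random(hidden.shape) >= p_drop
        h = hidden * mask / (1.0 - p_drop)   # inverted-dropout scaling
        logits = h @ W_lm                    # LM-head projection
        votes += int(np.argmax(logits) == draft_token)
    # Empirical acceptance: fraction of paths agreeing with the draft.
    return votes / n_paths >= tau
```

In a real speculative-decoding loop this test would replace (or augment) the usual rejection-sampling check: accepted draft tokens are kept, and the first rejected position is re-decoded by the target model.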

Jeongtae Lee, Minjung Jo, Hyunjoon Jeong, Gunho Park, Sunghyeon Woo, Joonghoon Kim, Se Jung Kwon, Dongsoo Lee • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | IFEval Accuracy | 86.17 | 625 |
| Mathematical Reasoning | GSM8K | Speedup (x) | 5.03 | 246 |
| Instruction Following | MT-Bench | MT-Bench Score | 8.59 | 215 |
| Instruction Following | Alpaca | Speedup (x) | 5.27 | 111 |
| Mathematical Reasoning | GSM8K 8-shot | Accuracy | 94.1 | 26 |
| Multi-task Language Understanding | MMLU | Speedup (x) | 1.49 | 8 |
| Machine Translation | KoMT-bench | Score | 8.12 | 3 |
