
Speculative Decoding with a Speculative Vocabulary

About

Speculative decoding has rapidly emerged as a leading approach for accelerating language model (LM) inference, as it offers substantial speedups while yielding identical outputs. It relies on a small draft model tasked with predicting the outputs of the target model. State-of-the-art speculative decoding methods use a draft model consisting of a single decoder layer and an output embedding matrix, with the latter dominating drafting time for the latest LMs. Recent work has sought to address this output distribution bottleneck by reducing the vocabulary of the draft model. Although this can improve throughput, it compromises speculation effectiveness whenever the target token is out-of-vocabulary. In this paper, we argue for vocabulary speculation as an alternative to a reduced vocabulary. We propose SpecVocab, an efficient and effective method that selects a vocabulary subset per decoding step. Across a variety of tasks, we demonstrate that SpecVocab can achieve a higher acceptance length than the state-of-the-art speculative decoding approach, EAGLE-3. Notably, this yields up to an 8.1% increase in average throughput over EAGLE-3.
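The core idea of drafting over a per-step vocabulary subset can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify how SpecVocab selects its subset, so `select_candidate_vocab` below is a hypothetical top-k heuristic based on the previous step's logits. The point it demonstrates is that gathering only the candidate rows of the output embedding matrix shrinks the dominant matmul from O(V·d) to O(|S|·d) while producing logits that agree exactly with the full-vocabulary logits on the subset.

```python
import numpy as np

def draft_logits_with_speculative_vocab(hidden, W_out, candidate_ids):
    """Compute draft-model logits over a per-step vocabulary subset.

    Instead of multiplying the hidden state by the full output embedding
    matrix W_out (shape V x d), gather only the rows for the candidate
    token ids, reducing the matmul cost from O(V*d) to O(|S|*d).
    """
    W_sub = W_out[candidate_ids]   # (|S|, d): gathered embedding rows
    return W_sub @ hidden          # (|S|,): logits over the subset only

def select_candidate_vocab(prev_logits, k):
    """Hypothetical subset-selection heuristic (an assumption, not the
    paper's method): keep the top-k token ids from the previous step's
    full-vocabulary logits."""
    return np.argpartition(prev_logits, -k)[-k:]

# Toy sizes for illustration.
rng = np.random.default_rng(0)
V, d, k = 1000, 64, 32
W_out = rng.standard_normal((V, d))
hidden = rng.standard_normal(d)
prev_logits = W_out @ hidden

cand = select_candidate_vocab(prev_logits, k)
sub_logits = draft_logits_with_speculative_vocab(hidden, W_out, cand)

# Subset logits match the corresponding full-vocabulary logits exactly,
# so the draft's greedy choice within the subset is unchanged.
assert np.allclose(sub_logits, prev_logits[cand])
```

The acceptance-length risk the abstract describes arises when the target model's next token falls outside `cand`: no subset logit can represent it, so the speculation fails at that position. Per-step selection, as opposed to a fixed reduced vocabulary, aims to keep that event rare.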

Miles Williams, Young D. Kwon, Rui Li, Alexandros Kouris, Stylianos I. Venieris • 2026

Related benchmarks

Task                              Dataset                Result                       Rank
Speculative Decoding              Spec-Bench             MT Score: 3.82               48
Language Model Decoding           Spec-Bench             Conv. Acc: 267.6             11
Speculative Decoding Throughput   Spec-Bench             Throughput (Conv.): 519.7    10
Speculative Decoding              Spec-Bench OLMo 2 7B   Conversation Score: 5.12     5
