
MoE-Spec: Expert Budgeting for Efficient Speculative Decoding

About

Speculative decoding accelerates Large Language Model (LLM) inference by verifying multiple drafted tokens in parallel. However, for Mixture-of-Experts (MoE) models, this parallelism introduces a severe bottleneck: large draft trees activate many unique experts, significantly increasing memory pressure and diminishing speedups from speculative decoding relative to autoregressive decoding. Prior methods reduce speculation depth when MoE verification becomes expensive. We propose MoE-Spec, a training-free verification-time expert budgeting method that decouples speculation depth from memory cost by enforcing a fixed expert capacity limit at each layer, loading only the experts that contribute most to verification and dropping the long tail of rarely used experts that drive bandwidth overhead. Experiments across multiple model scales and datasets show that this method yields 10–30% higher throughput than state-of-the-art speculative decoding baselines (EAGLE-3) at comparable quality, with flexibility to trade accuracy for further latency reductions through tighter budgets.
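The core idea, a per-layer expert budget chosen from aggregate routing weight over the draft tree, can be sketched as follows. This is a minimal illustration under assumptions: the function name `budget_experts`, the array shapes, and the selection rule (keep the experts with the highest total routing mass) are ours, not the paper's API.

```python
import numpy as np

def budget_experts(router_probs, budget):
    """Hypothetical sketch of verification-time expert budgeting at one MoE layer.

    router_probs: (num_draft_tokens, num_experts) routing probabilities for
        every token in the draft tree at this layer.
    budget: maximum number of unique experts to load at this layer.
    Returns sorted indices of the experts kept under the budget.
    """
    # Aggregate each expert's routing mass over all drafted tokens; the long
    # tail of rarely routed experts contributes little to verification but
    # drives memory-bandwidth overhead when loaded.
    expert_mass = router_probs.sum(axis=0)
    # Keep only the `budget` experts with the highest aggregate mass.
    kept = np.argsort(expert_mass)[::-1][:budget]
    return np.sort(kept)

# Toy example: 4 draft tokens routed over 8 experts, with a budget of 3.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(8), size=4)  # each row sums to 1
print(budget_experts(probs, budget=3))
```

Tightening `budget` loads fewer experts per layer (lower latency, less bandwidth) at the cost of occasionally dropping an expert a drafted token would have routed to, which is the accuracy/latency trade-off the abstract describes.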

Bradley McDanel, Steven Li, Sruthikesh Surineni, Harshit Khaitan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Speedup (×) | 2.4 | 177 |
| Code Generation | HumanEval | Speedup (×) | 2.4 | 18 |
| Code Generation | MBPP | Speedup (×) | 2.3 | 18 |
| Summarization | CNN/DM | Speedup (×) | 1.9 | 18 |
| Mathematical Reasoning | MATH 500 | Speedup (×) | 1.6 | 18 |
