
Beyond Tokens: Semantic-Aware Speculative Decoding for Efficient Inference by Probing Internal States

About

Large Language Models (LLMs) achieve strong performance across many tasks but suffer from high inference latency due to autoregressive decoding. The issue is exacerbated in Large Reasoning Models (LRMs), which generate lengthy chains of thought. While speculative decoding accelerates inference by drafting and verifying multiple tokens in parallel, existing methods operate at the token level and ignore semantic equivalence (i.e., different token sequences expressing the same meaning), leading to inefficient rejections. We propose SemanticSpec, a semantic-aware speculative decoding framework that verifies entire semantic sequences instead of individual tokens. SemanticSpec introduces a semantic probability estimation mechanism that probes the model's internal hidden states to assess the likelihood of generating sequences with specific meanings. Experiments on four benchmarks show that SemanticSpec achieves up to 2.7x speedup on DeepSeek-R1-32B and 2.1x on QwQ-32B, consistently outperforming token-level and sequence-level baselines in both efficiency and effectiveness.
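To make the motivation concrete, here is a toy sketch (not the paper's implementation) of why exact token-level verification rejects semantically equivalent drafts, while a sequence-level semantic check can accept them. The normalization-based `semantic_accept` below is a stand-in assumption for illustration only; SemanticSpec instead estimates semantic probability from the target model's hidden states.

```python
def token_level_accept(draft_tokens, target_tokens):
    """Count the longest prefix of draft tokens that exactly matches
    the target model's tokens (greedy token-level verification)."""
    n = 0
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            break
        n += 1
    return n


def semantic_accept(draft_text, target_text):
    """Toy semantic check: accept the whole draft if both sequences
    normalize to the same surface meaning (case/whitespace-insensitive).
    This stands in for SemanticSpec's hidden-state-based probability
    estimate, which is not reproduced here."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(draft_text) == norm(target_text)


# Two phrasings of the same answer: token-level verification rejects
# at the first token, while the semantic check accepts the draft.
draft = ["The", "answer", "is", "4"]
target = ["the", "answer", "is", "4"]

print(token_level_accept(draft, target))                    # rejects at token 0
print(semantic_accept(" ".join(draft), " ".join(target)))   # accepts the sequence
```

The gap between the two checks is exactly the inefficiency the paper targets: token-level rejections discard drafts whose meaning the target model would have produced anyway.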

Ximing Dong, Shaowei Wang, Dayi Lin, Boyuan Chen, Ahmed E. Hassan • 2026

Related benchmarks

Task                     Dataset    Metric       Result   Rank
Mathematical Reasoning   AMC 23     Pass@1       89.2     12
Mathematical Reasoning   MATH 500   Pass@1 Acc   92.17    12
Science Reasoning        GPQA D     Pass@1       57.25    12
Mathematical Reasoning   AIME 24    Pass@1       61       12
