
PACER: Blockwise Pre-verification for Speculative Decoding with Adaptive Length

About

Speculative decoding (SD) is a powerful technique for accelerating the inference of large language models (LLMs) without sacrificing accuracy. Typically, SD employs a small draft model to generate a fixed number of draft tokens, which are then verified in parallel by the target model. However, our experiments reveal that the optimal draft length varies significantly across decoding steps, suggesting that a fixed draft length limits the potential for further gains in decoding speed. To address this, we propose Pacer, a novel approach that dynamically controls draft length using a lightweight, trainable pre-verification layer. This layer pre-verifies draft tokens blockwise before they are sent to the target model, allowing the draft model to stop generation early if blockwise pre-verification fails. We implement Pacer on multiple SD model pairs and evaluate it across various benchmarks. Our results show that Pacer achieves up to a 2.66x speedup over autoregressive decoding and consistently outperforms standard speculative decoding. Furthermore, when integrated with Ouroboros, Pacer attains up to a 3.09x speedup.

Situo Zhang, Yifan Zhang, Zichen Zhu, Hankun Wang, Da Ma, Danyang Zhang, Lu Chen, Kai Yu • 2026
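To make the idea concrete, below is a minimal, hypothetical sketch of one adaptive-length speculative decoding step. The toy functions `draft_model`, `pre_verify`, and `target_accepts` are illustrative stand-ins invented for this example (not the paper's actual models or pre-verification layer); the point is only the control flow: drafting stops as soon as the blockwise pre-check fails, and the target model then verifies the shortened draft block as in standard SD.

```python
def draft_model(prefix):
    # Toy stand-in for a small draft model: proposes prefix[-1] + 1,
    # but deliberately errs when the correct next token would be 7.
    nxt = (prefix[-1] + 1) % 10
    return 0 if nxt == 7 else nxt

def pre_verify(prefix, token):
    # Hypothetical lightweight pre-verification layer: returns an
    # acceptance score for a drafted token (here, a toy 0/1 heuristic).
    return 1.0 if token == (prefix[-1] + 1) % 10 else 0.0

def target_accepts(prefix, token):
    # Toy stand-in for target-model verification of a single position.
    return token == (prefix[-1] + 1) % 10

def pacer_step(prefix, max_draft=8, threshold=0.5):
    """One adaptive-length SD step: draft until pre-verification fails,
    then let the target verify the (possibly shortened) draft block."""
    drafts, ctx = [], list(prefix)
    for _ in range(max_draft):
        tok = draft_model(ctx)
        # Adaptive length: stop drafting early on a failed pre-check,
        # instead of always emitting max_draft tokens.
        if pre_verify(ctx, tok) < threshold:
            break
        drafts.append(tok)
        ctx.append(tok)
    # Target-model verification (conceptually parallel): accept the
    # longest correct prefix of the draft block.
    accepted, ctx = [], list(prefix)
    for tok in drafts:
        if not target_accepts(ctx, tok):
            break
        accepted.append(tok)
        ctx.append(tok)
    return accepted

print(pacer_step([3]))  # drafting stops before the bad token 7
```

In a real system, the pre-verification score would come from a trained layer on the draft model's hidden states, and the target verification would be a single batched forward pass rather than a per-token loop.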

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Speed Up (x) | 3.09 | 177 |
| Speculative Decoding | Spec-Bench | MT Score | 18.53 | 48 |
| Code Generation | HumanEval | Average Tau (τ) | 7.46 | 45 |
| Summarization | CNN/DM | Speedup | 1.71 | 32 |
| Code Generation | MBPP | Tokens/s | 32.93 | 18 |
| Mathematical Reasoning | GSM8K | Throughput (tokens/s) | 39.69 | 18 |
