
ViSpec: Accelerating Vision-Language Models with Vision-Aware Speculative Decoding

About

Speculative decoding is a widely adopted technique for accelerating inference in large language models (LLMs), yet its application to vision-language models (VLMs) remains underexplored, with existing methods achieving only modest speedups (<1.5x). This gap is increasingly significant as multimodal capabilities become central to large-scale models. We hypothesize that large VLMs can effectively filter redundant image information layer by layer without compromising textual comprehension, whereas smaller draft models struggle to do so. To address this, we introduce Vision-Aware Speculative Decoding (ViSpec), a novel framework tailored for VLMs. ViSpec employs a lightweight vision adaptor module to compress image tokens into a compact representation, which is seamlessly integrated into the draft model's attention mechanism while preserving original image positional information. Additionally, we extract a global feature vector for each input image and augment all subsequent text tokens with this feature to enhance multimodal coherence. To overcome the scarcity of multimodal datasets with long assistant responses, we curate a specialized training dataset by repurposing existing datasets and generating extended outputs using the target VLM with modified prompts. Our training strategy mitigates the risk of the draft model exploiting direct access to the target model's hidden states, which could otherwise lead to shortcut learning when training solely on target model outputs. Extensive experiments validate ViSpec, achieving, to our knowledge, the first substantial speedup in VLM speculative decoding. Code is available at https://github.com/KangJialiang/ViSpec.
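To make the "vision adaptor" idea concrete, the following is a minimal sketch of compressing many image tokens into a handful of compact tokens via cross-attention pooling with learned queries. This is an illustrative toy in NumPy, not ViSpec's actual module: the function name, the query/token shapes, and the pooling formulation are all assumptions for exposition.

```python
import numpy as np

def compress_image_tokens(image_tokens, queries):
    """Cross-attention pooling (toy sketch, not ViSpec's real adaptor).

    m learned query vectors attend over n image tokens and produce m
    compressed tokens, so a draft model can consume a short summary of
    the image instead of the full token sequence.

    image_tokens: (n, d) array of image token embeddings
    queries:      (m, d) array of learned queries, m << n
    returns:      (m, d) array of compressed tokens
    """
    d = queries.shape[-1]
    scores = queries @ image_tokens.T / np.sqrt(d)   # (m, n) attention logits
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over image tokens
    return weights @ image_tokens                    # (m, d) compressed tokens

# Example: compress 576 patch tokens (e.g. a 24x24 grid) down to 16.
rng = np.random.default_rng(0)
patches = rng.normal(size=(576, 64))
learned_queries = rng.normal(size=(16, 64))
compressed = compress_image_tokens(patches, learned_queries)
```

Each compressed token is a convex combination of the original image tokens, which is why this kind of pooling preserves salient visual content while shrinking the sequence the draft model must attend over.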

Jialiang Kang, Han Shu, Wenshuo Li, Yingjie Zhai, Xinghao Chen • 2025

Related benchmarks

Task                           Dataset                                Avg Accepted Length (τ)   Rank
Image Captioning               COCO Captions                          4                         10
Multimodal Question Answering  ScienceQA (SQA)                        3.78                      10
Multimodal Understanding       MME                                    3.62                      10
Multimodal Understanding       MM-Vet                                 3.72                      10
Speculative Decoding           VideoDetailCaption (~17k vis. tokens)  3.52                      8
Speculative Decoding           MVBench                                3.6                       8
Speculative Decoding           LongVideoBench (~15k vis. tokens)      3.55                      8
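The τ metric reported above is the average number of tokens committed per target-model verification step. The sketch below shows a simplified greedy speculative decoding loop and how τ is computed; `draft_model` and `target_model` are hypothetical callables (each maps a token sequence to the next token), not ViSpec's actual interface, and probabilistic acceptance and the bonus token are omitted for brevity.

```python
def speculative_decode(target_model, draft_model, prompt, k=5, max_new=20):
    """Greedy speculative decoding sketch (toy, not ViSpec's implementation).

    Per step: the draft model proposes k tokens; the target model accepts
    the longest agreeing prefix and appends one correction on mismatch.
    Returns the generated tokens and tau (avg tokens committed per step).
    """
    tokens = list(prompt)
    committed_per_step = []
    while len(tokens) - len(prompt) < max_new:
        # 1. Draft model proposes k tokens autoregressively.
        ctx = list(tokens)
        draft = []
        for _ in range(k):
            t = draft_model(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Target model verifies the proposals in order.
        n = 0  # tokens committed this step
        for i in range(k):
            t_target = target_model(tokens)
            tokens.append(t_target)
            n += 1
            if t_target != draft[i]:
                break  # mismatch: target's token is the correction; stop
        committed_per_step.append(n)
    tau = sum(committed_per_step) / len(committed_per_step)
    return tokens, tau
```

When the draft agrees with the target on every proposal, each verification step commits all k tokens and τ approaches k; the τ ≈ 3.5–4 values in the table mean roughly that many tokens are generated per (expensive) target forward pass, which is the source of the speedup.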
