
TABED: Test-Time Adaptive Ensemble Drafting for Robust Speculative Decoding in LVLMs

About

Speculative decoding (SD) has proven effective for accelerating LLM inference by quickly generating draft tokens and verifying them in parallel. However, SD remains largely unexplored for Large Vision-Language Models (LVLMs), which extend LLMs to process both image and text prompts. To address this gap, we benchmark existing inference methods with small draft models on 11 datasets across diverse input scenarios and observe scenario-specific performance fluctuations. Motivated by these findings, we propose Test-time Adaptive Batched Ensemble Drafting (TABED), which dynamically ensembles multiple drafts obtained via batch inference by leveraging deviations from past ground truths available in the SD setting. The dynamic ensemble method achieves a robust average wall-time speedup of 1.74x over autoregressive decoding and a 5% improvement over single-drafting methods, while remaining training-free and keeping ensembling costs negligible through parameter sharing. With its plug-and-play compatibility, we further enhance TABED by integrating advanced verification and alternative drafting methods. Code and custom-trained models are available at https://github.com/furiosa-ai/TABED.
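The draft-then-verify loop that the abstract describes can be illustrated with a minimal sketch. This is not the TABED implementation; both model functions below are toy stand-ins for a small draft model and a large target model, and the accept/reject rule is simplified to exact-match greedy verification.

```python
# Minimal sketch of speculative decoding's draft-then-verify loop.
# draft_tokens and target_next_token are hypothetical toy stand-ins,
# NOT the TABED API or any real model.

def draft_tokens(prefix, k):
    """Toy 'small draft model': cheaply propose k candidate tokens."""
    return [(sum(prefix) + i) % 7 for i in range(1, k + 1)]

def target_next_token(prefix):
    """Toy 'large target model': the ground-truth next token for a prefix."""
    return (sum(prefix) + 1) % 7

def speculative_step(prefix, k=4):
    """Propose k draft tokens, verify them left to right, and keep the
    longest accepted run plus one corrected token from the target.

    In practice the target model scores all k positions in one parallel
    forward pass; here verification is written sequentially for clarity.
    """
    drafts = draft_tokens(prefix, k)
    accepted = []
    for t in drafts:
        truth = target_next_token(prefix + accepted)
        if t == truth:
            accepted.append(t)      # draft token matches: accept it for free
        else:
            accepted.append(truth)  # mismatch: emit target's token and stop
            break
    else:
        # All k drafts accepted: the target still yields one bonus token.
        accepted.append(target_next_token(prefix + accepted))
    return accepted
```

Each call produces several tokens for a single (parallel) target-model pass, which is where the speedup over one-token-at-a-time autoregressive decoding comes from; TABED's contribution is choosing *which* draft to verify by ensembling multiple batched drafts at test time.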

Minjae Lee, Wonjun Kang, Byeongkeun Ahn, Christian Classen, Kevin Galim, Seunghyuk Oh, Minghao Yan, Hyung Il Koo, Kangwook Lee • 2026

Related benchmarks

Task                   Dataset                Result                  Rank
Speculative Decoding   Benchmark First Turn   Block Efficiency 2.32   5
Speculative Decoding   Benchmark Second Turn  Block Efficiency 2.32   5
Speculative Decoding   OOD                    Block Efficiency 2.13   5
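Block efficiency, the metric reported above, is commonly defined as the average number of tokens generated per target-model forward pass: the accepted draft tokens plus the one token the target itself contributes at each verification step. A short sketch of that computation (the per-step acceptance counts here are made-up illustrative numbers, not the paper's data):

```python
def block_efficiency(accepted_per_step):
    """Average tokens produced per target forward pass in speculative decoding.

    Each verification step yields the accepted draft tokens plus one token
    from the target model, so the metric is mean(accepted) + 1.
    """
    if not accepted_per_step:
        raise ValueError("need at least one verification step")
    total_tokens = sum(accepted + 1 for accepted in accepted_per_step)
    return total_tokens / len(accepted_per_step)

# Hypothetical run: three steps accepting 2, 1, and 1 draft tokens
# produce (3 + 2 + 2) / 3 ≈ 2.33 tokens per target pass.
print(round(block_efficiency([2, 1, 1]), 2))
```

A block efficiency of 2.32, as in the first-turn benchmark row, means each expensive target-model pass yields about 2.32 tokens on average instead of one.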
