
Fast-dVLM: Efficient Block-Diffusion VLM via Direct Conversion from Autoregressive VLM

About

Vision-language models (VLMs) predominantly rely on autoregressive (AR) decoding, which generates tokens one at a time and fundamentally limits inference throughput. This limitation is especially acute in physical AI scenarios such as robotics and autonomous driving, where VLMs are deployed on edge devices at batch size one, making AR decoding memory-bandwidth-bound and leaving hardware parallelism underutilized. While block-wise discrete diffusion has shown promise for parallel text generation, extending it to VLMs remains challenging: the model must jointly handle continuous visual representations and discrete text tokens while preserving pretrained multimodal capabilities. We present Fast-dVLM, a block-diffusion-based VLM that enables KV-cache-compatible parallel decoding and speculative block decoding for inference acceleration. We systematically compare two AR-to-diffusion conversion strategies: a two-stage approach that first adapts the LLM backbone with text-only diffusion fine-tuning before multimodal training, and a direct approach that converts the full AR VLM in one stage. Under comparable training budgets, direct conversion proves substantially more efficient because it leverages the already multimodally aligned VLM; we therefore adopt it as our recommended recipe. We introduce a suite of multimodal diffusion adaptations (block size annealing, causal context attention, auto-truncation masking, and vision efficient concatenation) that collectively enable effective block diffusion in the VLM setting. Extensive experiments across 11 multimodal benchmarks show that Fast-dVLM matches its autoregressive counterpart in generation quality. With SGLang integration and FP8 quantization, Fast-dVLM achieves over 6x end-to-end inference speedup over the AR baseline.
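The abstract's core decoding idea can be illustrated with a toy sketch: generation proceeds block by block, completed blocks join the causal context (where a KV cache would be reused), and within the current block masked positions are denoised in parallel, committing the highest-confidence tokens each step. Everything below is a simplified illustration, not the Fast-dVLM implementation; `toy_model`, the vocabulary size, and the confidence rule are stand-in assumptions.

```python
import numpy as np

MASK = -1  # sentinel for a masked (not-yet-decoded) position

def toy_model(context, block):
    """Stand-in for a VLM forward pass: returns per-position logits over a
    tiny vocabulary. A real block-diffusion model would attend causally to
    `context` (served from a KV cache) and bidirectionally within `block`."""
    rng = np.random.default_rng(len(context) + block.count(MASK))
    return rng.random((len(block), 8))  # shape: (block_len, vocab=8)

def decode_block(context, block_len, steps=4):
    """Iteratively unmask one block: each step commits the most confident
    masked positions in parallel, so a block needs ~`steps` forward passes
    instead of `block_len` sequential ones."""
    block = [MASK] * block_len
    per_step = max(1, block_len // steps)
    while MASK in block:
        logits = toy_model(context, block)
        conf = logits.max(axis=1)                      # confidence per position
        masked = [i for i, t in enumerate(block) if t == MASK]
        for i in sorted(masked, key=lambda i: -conf[i])[:per_step]:
            block[i] = int(logits[i].argmax())         # commit token
    return block

def generate(prompt_len=4, num_blocks=3, block_len=4):
    """Block-by-block generation: finished blocks are appended to the
    context (where their keys/values would be cached and never recomputed)."""
    context = list(range(prompt_len))  # toy prompt token ids
    for _ in range(num_blocks):
        context += decode_block(context, block_len)
    return context[prompt_len:]

print(generate())  # 12 generated token ids
```

With `steps=4` and `block_len=4` the toy commits one token per denoising step; raising `per_step` is what buys the parallel speedup over one-token-at-a-time AR decoding.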

Chengyue Wu, Shiyi Lan, Yonggan Fu, Sensen Gao, Jin Wang, Jincheng Yu, Jose M. Alvarez, Pavlo Molchanov, Ping Luo, Song Han, Ligeng Zhu, Enze Xie• 2026

Related benchmarks

Task                                    Dataset          Metric       Result   Rank
Object Hallucination Evaluation         POPE             -            -        1455
Document Visual Question Answering      DocVQA           -            -        263
Multimodal Understanding                MMMU             MMMU Score   46.6     69
Chart Question Answering                ChartQA          Score        83.1     20
Visual Question Answering               GQA              Score        63.3     17
Short-answer Visual Question Answering  TextVQA          Accuracy     79.3     9
Short-answer Visual Question Answering  RealworldQA      Accuracy     65.1     9
Long-answer Visual Question Answering   MMMU-Pro (V)     Accuracy     24.6     9
Short-answer Visual Question Answering  AI2D             Score        79.7     9
Short-answer Visual Question Answering  SEEDBench2 Plus  Accuracy     67.2     9
Showing 10 of 11 rows
