
LaViDa: A Large Diffusion Language Model for Multimodal Understanding

About

Modern Vision-Language Models (VLMs) can solve a wide range of tasks requiring visual reasoning. In real-world scenarios, desirable properties for VLMs include fast inference and controllable generation (e.g., constraining outputs to adhere to a desired format). However, existing autoregressive (AR) VLMs like LLaVA struggle in these aspects. Discrete diffusion models (DMs) offer a promising alternative, enabling parallel decoding for faster inference and bidirectional context for controllable generation through text-infilling. While effective in language-only settings, DMs' potential for multimodal tasks is underexplored. We introduce LaViDa, a family of VLMs built on DMs. We build LaViDa by equipping DMs with a vision encoder and jointly fine-tuning the combined parts for multimodal instruction following. To address the challenges encountered, LaViDa incorporates novel techniques such as complementary masking for effective training, prefix KV caching for efficient inference, and timestep shifting for high-quality sampling. Experiments show that LaViDa achieves competitive or superior performance to AR VLMs on multimodal benchmarks such as MMMU, while offering the unique advantages of DMs, including a flexible speed-quality tradeoff, controllability, and bidirectional reasoning. On COCO captioning, LaViDa surpasses Open-LLaVa-Next-8B by +4.1 CIDEr with a 1.92x speedup. On bidirectional tasks, it achieves a +59% improvement on Constrained Poem Completion. These results demonstrate LaViDa as a strong alternative to AR VLMs. Code and models will be released in the camera-ready version.
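The abstract's core decoding idea, parallel unmasking under a shifted timestep schedule, can be sketched in miniature. The snippet below is an illustrative toy, not LaViDa's implementation: the shift function, schedule form, and confidence scores are assumptions chosen only to show the control flow (reveal the most confident masked positions at each step until the schedule's reveal fraction is met).

```python
import math
import random

MASK = "<mask>"


def shifted_schedule(num_steps, shift=3.0):
    """Timestep-shifted reveal schedule.

    Maps uniform steps t in (0, 1] through t' = shift*t / (1 + (shift-1)*t),
    redistributing how many tokens are revealed per step. This particular
    shift function is an illustrative assumption, not the paper's formula.
    """
    ts = [(i + 1) / num_steps for i in range(num_steps)]
    return [shift * t / (1 + (shift - 1) * t) for t in ts]


def decode(target, num_steps=4, shift=3.0, seed=0):
    """Toy parallel decoder over a fully masked sequence.

    At each step, reveal the highest-"confidence" masked positions until the
    cumulative fraction revealed matches the schedule. A real diffusion model
    would re-run the denoiser each step; here random scores over a fixed
    target string stand in for denoiser confidences.
    """
    rng = random.Random(seed)
    n = len(target)
    seq = [MASK] * n
    for frac in shifted_schedule(num_steps, shift):
        k = math.ceil(frac * n)  # total tokens that should be revealed by now
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        conf = {i: rng.random() for i in masked}  # stand-in confidences
        need = k - (n - len(masked))  # how many more to reveal this step
        for i in sorted(masked, key=lambda i: -conf[i])[:max(need, 0)]:
            seq[i] = target[i]
    return "".join(seq)
```

Because the schedule ends at t' = 1.0, every position is revealed by the final step; fewer steps trade quality for speed, which is the speed-quality knob the abstract refers to.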

Shufan Li, Konstantinos Kallidromitis, Hritik Bansal, Akash Gokul, Yusuke Kato, Kazuki Kozuka, Jason Kuen, Zhe Lin, Kai-Wei Chang, Aditya Grover • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | ChartQA | Accuracy | 64.6 | 239 |
| Visual Mathematical Reasoning | MathVista | Accuracy | 44.8 | 189 |
| Visual Question Answering | AI2D | Accuracy | 70 | 174 |
| Chart Question Answering | ChartQA (test) | Accuracy | 64.6 | 129 |
| Multimodal Understanding | MMMU (val) | -- | -- | 111 |
| Diagram Question Answering | AI2D (test) | Accuracy | 70 | 103 |
| Multimodal Understanding | SEED-Bench Image | Accuracy | 66.5 | 82 |
| Visual Mathematical Reasoning | MathVerse | Accuracy | 27.2 | 73 |
| Multimodal Understanding | MMBench en (dev) | Score | 70.5 | 38 |
| Multimodal Understanding | MME Perception | -- | -- | 33 |

Showing 10 of 13 rows.
