
Dream-VL & Dream-VLA: Open Vision-Language and Vision-Language-Action Models with Diffusion Language Model Backbone

About

While autoregressive Large Vision-Language Models (VLMs) have achieved remarkable success, their sequential generation often limits their efficacy in complex visual planning and dynamic robotic control. In this work, we investigate the potential of constructing Vision-Language Models upon diffusion-based large language models (dLLMs) to overcome these limitations. We introduce Dream-VL, an open diffusion-based VLM (dVLM) that achieves state-of-the-art performance among previous dVLMs. On various benchmarks, Dream-VL is comparable to top-tier autoregressive (AR) VLMs trained on open data, and it exhibits superior potential when applied to visual planning tasks. Building upon Dream-VL, we introduce Dream-VLA, a dLLM-based Vision-Language-Action model (dVLA) developed through continued pre-training on open robotic datasets. We demonstrate that the natively bidirectional nature of the diffusion backbone makes it a superior foundation for VLA tasks, inherently suited to action chunking and parallel generation and leading to significantly faster convergence in downstream fine-tuning. Dream-VLA achieves top-tier performance, with a 97.2% average success rate on LIBERO, a 71.4% overall average on SimplerEnv-Bridge, and a 60.5% overall average on SimplerEnv-Fractal, surpassing leading models such as $\pi_0$ and GR00T-N1. We also validate that dVLMs surpass AR baselines on downstream tasks across different training objectives. We release both Dream-VL and Dream-VLA to facilitate further research in the community.
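To make the parallel-generation claim concrete, below is a minimal, hypothetical sketch of how a masked-diffusion backbone can decode a whole action chunk at once: all H action slots start out masked, the bidirectional model scores every slot jointly, and the highest-confidence slots are fixed over a few denoising steps. The `model` interface, `MASK_ID`, and the chunk/step sizes here are illustrative assumptions, not the released Dream-VLA implementation.

```python
import torch

MASK_ID = 0   # hypothetical mask-token id (assumption, not from the paper)
H = 8         # action-chunk horizon: number of action tokens decoded together
STEPS = 4     # denoising steps; roughly H/STEPS slots are finalized per step


def decode_action_chunk(model, prefix_ids: torch.Tensor) -> torch.Tensor:
    """Fill H masked action slots appended to a vision-language prefix.

    `model(seq)` is assumed to return (batch, seq_len, vocab) logits with
    full bidirectional attention, so every masked slot is predicted jointly
    from the prefix and from the other slots.
    """
    seq = torch.cat([prefix_ids, torch.full((H,), MASK_ID, dtype=torch.long)])
    slots = torch.arange(len(prefix_ids), len(seq))
    masked = torch.ones(H, dtype=torch.bool)

    for step in range(STEPS):
        remaining = int(masked.sum())
        if remaining == 0:
            break
        logits = model(seq.unsqueeze(0))[0, slots]            # (H, vocab)
        conf, pred = logits.softmax(-1).max(-1)               # per-slot confidence
        conf[~masked] = float("-inf")                         # finalized slots stay fixed
        k = remaining if step == STEPS - 1 else max(1, remaining // (STEPS - step))
        pick = conf.topk(k).indices                           # unmask most confident slots
        seq[slots[pick]] = pred[pick]
        masked[pick] = False

    return seq[slots]                                         # decoded action chunk


if __name__ == "__main__":
    vocab = 32
    # Dummy stand-in for a dVLM/dVLA backbone: random logits over a toy vocabulary.
    dummy = lambda x: torch.randn(x.shape[0], x.shape[1], vocab)
    prefix = torch.randint(1, vocab, (16,))                   # stand-in VL prefix tokens
    print(decode_action_chunk(dummy, prefix))
```

An AR baseline would need one forward pass per action token; here the number of forward passes is bounded by STEPS regardless of chunk length, which is the intuition behind the chunk-level, parallel decoding the abstract attributes to the diffusion backbone.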

Jiacheng Ye, Shansan Gong, Jiahui Gao, Junming Fan, Shuang Wu, Wei Bi, Haoli Bai, Lifeng Shang, Lingpeng Kong • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robot Manipulation | LIBERO | Goal Achievement | 97.2 | 494
Mathematical Reasoning | MathVista | Score | 64.5 | 322
Video Understanding | VideoMME | -- | -- | 192
Document Visual Question Answering | DocVQA | ANLS | 94.4 | 164
Robot Manipulation | SimplerEnv WidowX Robot tasks (test) | Success Rate (Spoon) | 79.2 | 79
Multi-discipline Multimodal Understanding | MMMU-Pro | -- | -- | 56
Video Understanding | MLVU | -- | -- | 54
Mathematical Reasoning | MathVerse | -- | -- | 39
Video Understanding | SEED-Bench Video Understanding | Accuracy | 58.1 | 33
Document Visual Question Answering | InfoVQA | ANLS | 81.4 | 32

Showing 10 of 20 rows.

Other info

GitHub
