LLaDA-VLA: Vision Language Diffusion Action Models

About

The rapid progress of autoregressive vision-language models (VLMs) has inspired growing interest in vision-language-action models (VLAs) for robotic manipulation. Recently, masked diffusion models, a paradigm distinct from autoregressive models, have begun to demonstrate competitive performance in text generation and multimodal applications, leading to the development of a series of diffusion-based VLMs (d-VLMs). However, leveraging such models for robot policy learning remains largely unexplored. In this work, we present LLaDA-VLA, the first Vision-Language-Diffusion-Action model built upon pretrained d-VLMs for robotic manipulation. To effectively adapt d-VLMs to the robotic domain, we introduce two key designs: (1) a localized special-token classification strategy that replaces full-vocabulary classification with classification over a small set of special action tokens, reducing adaptation difficulty; (2) a hierarchical action-structured decoding strategy that decodes action sequences hierarchically, accounting for the dependencies within and across actions. Extensive experiments demonstrate that LLaDA-VLA significantly outperforms state-of-the-art VLAs on both simulated and real-world robots.
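
To make the two designs concrete, here is a minimal PyTorch sketch. The vocabulary size, the number of reserved action tokens, the fixed chunk layout, and all function names are illustrative assumptions for this page, not the paper's released implementation.

```python
# Minimal sketch of the two designs described in the abstract. All sizes and
# names below are assumptions, not the authors' released code.
import torch

VOCAB_SIZE = 128_000                      # assumed d-VLM vocabulary size
NUM_ACTION_TOKENS = 256                   # assumed count of special action tokens
ACTION_TOKEN_IDS = torch.arange(VOCAB_SIZE - NUM_ACTION_TOKENS, VOCAB_SIZE)

def localized_action_logits(logits: torch.Tensor) -> torch.Tensor:
    """Localized special-token classification: score only the special action
    tokens instead of the full vocabulary, shrinking the output space the
    policy must adapt to."""
    return logits[..., ACTION_TOKEN_IDS]  # (..., NUM_ACTION_TOKENS)

@torch.no_grad()
def hierarchical_decode(model, seq, action_len=7, num_actions=8):
    """Hierarchical action-structured decoding (schematic): at each step,
    commit the still-masked action chunk the model is most confident about
    (across-action level), filling all of its tokens at once (within-action
    level), then re-run the model on the partially unmasked sequence."""
    seq = seq.clone()                     # (num_actions * action_len,) token ids
    done = torch.zeros(num_actions, dtype=torch.bool)
    for _ in range(num_actions):
        probs = localized_action_logits(model(seq)).softmax(-1)
        conf, pred = probs.max(-1)        # per-token confidence and argmax index
        # across actions: rank whole chunks by mean token confidence
        chunk_conf = conf.view(num_actions, action_len).mean(-1)
        chunk_conf[done] = float("-inf")
        a = int(chunk_conf.argmax())
        lo, hi = a * action_len, (a + 1) * action_len
        # within the chosen action: map restricted indices back to vocab ids
        seq[lo:hi] = ACTION_TOKEN_IDS[pred[lo:hi]]
        done[a] = True
    return seq
```

Here each "action" is assumed to be a fixed-length chunk of tokens (e.g., 7 per end-effector pose), and `model` is any callable returning per-position logits over the full vocabulary; the actual masking schedule and confidence measure used by LLaDA-VLA may differ.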

Yuqing Wen, Hebei Li, Kefan Gu, Yucheng Zhao, Tiancai Wang, Xiaoyan Sun • 2025

Related benchmarks

Task | Dataset | Result | Rank
Long-horizon robot manipulation | CALVIN ABCD→D | Task 1 Completion Rate: 95.6 | 127
Robot Manipulation | SimplerEnv WidowX Robot tasks (test) | Success Rate (Spoon): 56.9 | 79
Sequential Robotic Manipulation | CALVIN | Success Rate (1 task): 95.6 | 45
Robot Manipulation | SimplerEnv WidowX (test) | Task Success Rate (Spoon on Towel): 56.9 | 12
