
Dita: Scaling Diffusion Transformer for Generalist Vision-Language-Action Policy

About

While recent vision-language-action models trained on diverse robot datasets exhibit promising generalization capabilities with limited in-domain data, their reliance on compact action heads to predict discretized or continuous actions constrains adaptability to heterogeneous action spaces. We present Dita, a scalable framework that leverages Transformer architectures to directly denoise continuous action sequences through a unified multimodal diffusion process. Departing from prior methods that condition denoising on fused embeddings via shallow networks, Dita employs in-context conditioning -- enabling fine-grained alignment between denoised actions and raw visual tokens from historical observations. This design explicitly models action deltas and environmental nuances. By scaling the diffusion action denoiser alongside the Transformer's scalability, Dita effectively integrates cross-embodiment datasets across diverse camera perspectives, observation scenes, tasks, and action spaces. This synergy enhances robustness to diverse variations and facilitates the successful execution of long-horizon tasks. Evaluations across extensive benchmarks demonstrate state-of-the-art or comparable performance in simulation. Notably, Dita achieves robust real-world adaptation to environmental variations and complex long-horizon tasks through 10-shot finetuning, using only third-person camera inputs. The architecture establishes a versatile, lightweight, and open-source baseline for generalist robot policy learning. Project Page: https://robodita.github.io.
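To make the in-context conditioning idea concrete, the following is a minimal sketch (not the authors' code; all names, dimensions, and layer choices are illustrative assumptions): instead of compressing observations into a fused embedding that conditions a shallow denoising head, the noisy action tokens are concatenated into one sequence with the raw visual observation tokens and a diffusion-timestep token, and a plain Transformer predicts the noise for the action tokens directly.

```python
# Minimal sketch of a diffusion action denoiser with in-context conditioning.
# Hypothetical names and sizes (DitaSketch, d_model=64, 32-d visual tokens);
# a real policy would use a much larger backbone and language tokens as well.
import torch
import torch.nn as nn

class DitaSketch(nn.Module):
    def __init__(self, d_model=64, action_dim=7, chunk=8):
        super().__init__()
        self.action_in = nn.Linear(action_dim, d_model)  # embed noisy action chunk
        self.obs_in = nn.Linear(32, d_model)             # embed raw visual tokens
        self.t_embed = nn.Embedding(1000, d_model)       # diffusion timestep token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.noise_out = nn.Linear(d_model, action_dim)  # per-step noise prediction
        self.chunk = chunk

    def forward(self, noisy_actions, obs_tokens, t):
        # In-context conditioning: one token sequence, no fused-embedding bottleneck.
        a = self.action_in(noisy_actions)                # (B, chunk, d)
        o = self.obs_in(obs_tokens)                      # (B, n_obs, d)
        ts = self.t_embed(t).unsqueeze(1)                # (B, 1, d)
        h = self.backbone(torch.cat([ts, o, a], dim=1))
        # Read noise predictions off the action-token positions only.
        return self.noise_out(h[:, -self.chunk:])

model = DitaSketch()
eps = model(torch.randn(2, 8, 7),                 # noisy action chunk
            torch.randn(2, 16, 32),               # 16 visual tokens per sample
            torch.randint(0, 1000, (2,)))         # diffusion timesteps
print(eps.shape)  # torch.Size([2, 8, 7])
```

Because the action tokens attend directly to every observation token, the denoiser can align individual denoising steps with fine-grained visual detail, which is the property the abstract credits for modeling action deltas and environmental nuances.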

Zhi Hou, Tianyi Zhang, Yuwen Xiong, Haonan Duan, Hengjun Pu, Ronglei Tong, Chengyang Zhao, Xizhou Zhu, Yu Qiao, Jifeng Dai, Yuntao Chen• 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robot Manipulation | LIBERO | Goal Achievement | 93.2 | 700
Robotic Manipulation | LIBERO | Spatial Success Rate | 84.2 | 314
Long-horizon task completion | Calvin ABC->D | Success Rate (1) | 94.5 | 67
Robot Manipulation | Calvin ABC->D | Average Successful Length | 3.61 | 48
Robotic Manipulation | LIBERO v1 (test) | Average Success Rate | 82.4 | 46
Sequential Robotic Manipulation | CALVIN | Success Rate (1 task) | 94.5 | 45
Robotic Manipulation | LIBERO 1.0 (test) | Long | 83.6 | 40
Robot Manipulation | LIBERO simulation | Average Success Rate | 92.3 | 36
Robot Manipulation | LIBERO | Spatial Success Rate | 84.2 | 30
Robot Manipulation | LIBERO OpenVLA-OFT | LIBERO Spatial Success | 84.2 | 21

(Showing 10 of 24 rows.)
