
ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model

About

Humans possess a unified cognitive ability to perceive, comprehend, and interact with the physical world. Why can't large language models replicate this holistic understanding? Through a systematic analysis of existing training paradigms for vision-language-action (VLA) models, we identify two key challenges: spurious forgetting, where robot training overwrites crucial visual-text alignments, and task interference, where competing control and understanding tasks degrade performance when trained jointly. To overcome these limitations, we propose ChatVLA, a novel framework featuring Phased Alignment Training, which incrementally integrates multimodal data after initial control mastery, and a Mixture-of-Experts architecture that minimizes task interference. ChatVLA demonstrates competitive performance on visual question-answering datasets and significantly surpasses state-of-the-art VLA methods on multimodal understanding benchmarks. Notably, it achieves six times higher performance on MMMU and scores 47.2% on MMStar with a more parameter-efficient design than ECoT. Furthermore, ChatVLA demonstrates superior performance on 25 real-world robot manipulation tasks compared to existing VLA methods such as OpenVLA. Our findings highlight the potential of our unified framework for achieving both robust multimodal understanding and effective robot control.
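To make the task-interference idea concrete, here is a minimal sketch of a mixture-of-experts layer in the spirit the abstract describes: a learned router softly gates between separate expert projections, so understanding-style and control-style inputs can rely on mostly disjoint parameters. This is an illustrative toy, not the ChatVLA implementation; the class name, dimensions, and two-expert setup are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoExpertMoE:
    """Toy mixture-of-experts layer (illustrative, not ChatVLA's code).

    A router produces a softmax gate over two expert projections;
    the output is the gate-weighted sum of the experts' outputs.
    """

    def __init__(self, dim, hidden):
        # Router weights: map input features to one logit per expert.
        self.w_router = rng.standard_normal((dim, 2)) * 0.1
        # Two independent expert projection matrices.
        self.experts = [rng.standard_normal((dim, hidden)) * 0.1
                        for _ in range(2)]

    def forward(self, x):
        # Softmax gate over the two experts, per input row.
        logits = x @ self.w_router
        gates = np.exp(logits - logits.max(axis=-1, keepdims=True))
        gates /= gates.sum(axis=-1, keepdims=True)          # (B, 2)
        # Run both experts and mix their outputs by the gate weights.
        outs = np.stack([x @ w for w in self.experts], axis=-1)  # (B, hidden, 2)
        return (outs * gates[:, None, :]).sum(axis=-1)           # (B, hidden)

layer = TwoExpertMoE(dim=8, hidden=16)
y = layer.forward(rng.standard_normal((4, 8)))
print(y.shape)  # (4, 16)
```

If the router learns to send control tokens to one expert and understanding tokens to the other, gradient updates for each task mostly touch separate expert weights, which is one way such an architecture can reduce interference between the two objectives.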

Zhongyi Zhou, Yichen Zhu, Minjie Zhu, Junjie Wen, Ning Liu, Zhiyuan Xu, Weibin Meng, Ran Cheng, Yaxin Peng, Chaomin Shen, Feifei Feng • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robot Manipulation | LIBERO (test) | Average Success Rate | 95.2 | 142
Robotic Manipulation | Calvin ABCD→D | Success Rate (1 Inst) | 95.5 | 26
Robotic Manipulation | LIBERO (test) | Object Success Rate | 96.8 | 14
Robot Manipulation | LIBERO (all four suites, combined) | Spatial Success Rate | 95.2 | 12
