
ChatVLA: Unified Multimodal Understanding and Robot Control with Vision-Language-Action Model

About

Humans possess a unified cognitive ability to perceive, comprehend, and interact with the physical world. Why can't large language models replicate this holistic understanding? Through a systematic analysis of existing training paradigms in vision-language-action (VLA) models, we identify two key challenges: spurious forgetting, where robot training overwrites crucial visual-text alignments, and task interference, where competing control and understanding tasks degrade performance when trained jointly. To overcome these limitations, we propose ChatVLA, a novel framework featuring Phased Alignment Training, which incrementally integrates multimodal data after initial control mastery, and a Mixture-of-Experts architecture to minimize task interference. ChatVLA demonstrates competitive performance on visual question-answering datasets and significantly surpasses state-of-the-art VLA methods on multimodal understanding benchmarks. Notably, it achieves six times higher performance on MMMU and scores 47.2% on MMStar with a design more parameter-efficient than ECoT's. Furthermore, ChatVLA demonstrates superior performance on 25 real-world robot manipulation tasks compared to existing VLA methods such as OpenVLA. Our findings highlight the potential of our unified framework for achieving both robust multimodal understanding and effective robot control.
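The Mixture-of-Experts idea described in the abstract — routing a shared representation to task-specific experts so that control and understanding do not overwrite each other's parameters — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the two-expert split, layer sizes, and softmax router are all assumptions for the sketch.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TwoExpertMoE:
    """Toy MoE layer: a learned router mixes an 'understanding' expert
    and a 'control' expert, so the two tasks share a backbone while
    keeping mostly separate parameters (reducing task interference)."""

    def __init__(self, dim, rng):
        # Router maps each token to 2 expert logits.
        self.router = rng.standard_normal((dim, 2)) * 0.02
        # One linear expert per task (stand-ins for expert MLPs).
        self.experts = [rng.standard_normal((dim, dim)) * 0.02 for _ in range(2)]

    def forward(self, h):
        gates = softmax(h @ self.router)                          # (tokens, 2)
        outs = np.stack([h @ w for w in self.experts], axis=-1)   # (tokens, dim, 2)
        # Gate-weighted mixture of expert outputs.
        return (outs * gates[:, None, :]).sum(axis=-1)            # (tokens, dim)

rng = np.random.default_rng(0)
layer = TwoExpertMoE(16, rng)
h = rng.standard_normal((4, 16))   # 4 tokens of a shared representation
y = layer.forward(h)
```

In practice the router would be trained end-to-end so that tokens from understanding-style inputs and control-style inputs are sent to different experts; the sketch only shows the gating mechanics.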

Zhongyi Zhou, Yichen Zhu, Minjie Zhu, Junjie Wen, Ning Liu, Zhiyuan Xu, Weibin Meng, Ran Cheng, Yaxin Peng, Chaomin Shen, Feifei Feng• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Understanding | MMBench | Accuracy | 69 | 637 |
| Visual Question Answering | ChartQA | Accuracy | 59.9 | 371 |
| Multimodal Understanding | MMStar | Accuracy | 47.2 | 324 |
| Visual Question Answering | AI2D | Accuracy | 67.6 | 249 |
| Robot Manipulation | LIBERO (test) | Average Success Rate | 95.2 | 184 |
| Visual Question Answering | DocVQA | Accuracy | 83.3 | 162 |
| Multimodal Understanding | MMMU (val) | -- | -- | 152 |
| Visual Question Answering | InfoVQA | Accuracy | 53.3 | 135 |
| Robotic Manipulation | Calvin ABCD→D | Avg Length | 3.8 | 89 |
| Multimodal Understanding | MME Perception | -- | -- | 46 |

Showing 10 of 16 rows
