
CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation

About

The advancement of large Vision-Language-Action (VLA) models has significantly improved robotic manipulation in terms of language-guided task execution and generalization to unseen scenarios. While existing VLAs adapted from pretrained large Vision-Language Models (VLMs) have demonstrated promising generalizability, their task performance is still unsatisfactory, as indicated by low task success rates in different environments. In this paper, we present a new advanced VLA architecture derived from VLMs. Unlike previous works that directly repurpose a VLM for action prediction through simple action quantization, we propose a componentized VLA architecture with a specialized action module conditioned on the VLM output. We systematically study the design of the action module and demonstrate the strong performance enhancement brought by diffusion action transformers for action sequence modeling, as well as their favorable scaling behaviors. We also conduct comprehensive experiments and ablation studies to evaluate the efficacy of our models with varied designs. Evaluation on 5 robot embodiments in simulation and the real world shows that our model not only significantly surpasses existing VLAs in task performance but also exhibits remarkable adaptation to new robots and generalization to unseen objects and backgrounds. It exceeds the average success rate of OpenVLA, which has a similar model size (7B) to ours, by over 35% in simulated evaluation and 55% in real robot experiments. It also outperforms the large RT-2-X model (55B) by 18% in absolute success rate in simulation. Code and models can be found on our project page (https://cogact.github.io/).

Qixiu Li, Yaobo Liang, Zeyu Wang, Lin Luo, Xi Chen, Mozheng Liao, Fangyun Wei, Yu Deng, Sicheng Xu, Yizhong Zhang, Xiaofan Wang, Bei Liu, Jianlong Fu, Jianmin Bao, Dong Chen, Yuanchun Shi, Jiaolong Yang, Baining Guo • 2024
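The abstract describes a componentized design: a pretrained VLM produces a high-level "cognition" feature, and a separate diffusion action transformer, conditioned on that feature, denoises a sequence of low-level actions. Below is a minimal PyTorch sketch of that idea. It is an illustration only: the module names, the 4096-dim conditioning width, the 7-DoF action space, the 16-step horizon, and the plain DDPM epsilon-prediction objective are all assumptions made for the sketch, not CogACT's actual implementation.

```python
# Minimal sketch of a componentized VLA: a VLM emits a "cognition" feature,
# and a separate diffusion transformer denoises action sequences conditioned
# on it. All names and sizes below are hypothetical illustrations.
import torch
import torch.nn as nn

class DiffusionActionTransformer(nn.Module):
    def __init__(self, action_dim=7, horizon=16, d_model=256, n_layers=4):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, d_model)
        self.cond_proj = nn.Linear(4096, d_model)      # VLM feature -> model width (assumed 4096)
        self.time_embed = nn.Embedding(1000, d_model)  # diffusion timestep embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, action_dim)

    def forward(self, noisy_actions, t, cognition):
        # noisy_actions: (B, horizon, action_dim); t: (B,); cognition: (B, 4096)
        tok = self.action_proj(noisy_actions)
        cond = (self.cond_proj(cognition) + self.time_embed(t)).unsqueeze(1)
        x = torch.cat([cond, tok], dim=1)           # prepend one conditioning token
        return self.head(self.blocks(x)[:, 1:])    # predicted noise per action step

def training_step(model, actions, cognition, alphas_cumprod):
    # One DDPM-style training step (epsilon prediction) on a clean action chunk.
    B = actions.shape[0]
    t = torch.randint(0, 1000, (B,))
    noise = torch.randn_like(actions)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noisy = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise
    pred = model(noisy, t, cognition)
    return nn.functional.mse_loss(pred, noise)

# Example usage with random stand-in data:
model = DiffusionActionTransformer()
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
loss = training_step(model, torch.randn(8, 16, 7), torch.randn(8, 4096), alphas_cumprod)
```

The design point from the abstract survives the simplification: the VLM output enters the action module only as a conditioning signal, so the diffusion action transformer can be scaled or redesigned independently of the language backbone.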

Related benchmarks

Task | Dataset | Metric | Result | Rank
Robot Manipulation | LIBERO | Goal Achievement | 93.5 | 700
Robotic Manipulation | LIBERO | Spatial Success Rate | 96 | 314
Robot Manipulation | SimplerEnv WidowX Robot tasks (test) | Success Rate (Spoon) | 71.7 | 79
Robot Manipulation | SimplerEnv Google Robot tasks Variant Aggregation | Average Success Rate | 61.33 | 67
Robot Manipulation | SimplerEnv Google Robot tasks Visual Matching | Pick Coke Can Success Rate | 91.3 | 62
Robot Manipulation | SimplerEnv WidowX | Success Rate: Put Spoon on Towel | 71.7 | 58
Robotic Manipulation | SIMPLER Visual Matching WidowX robot | Put Spoon on Towel Score | 71.7 | 51
Robotic Manipulation | LIBERO v1 (test) | Average Success Rate | 93.2 | 46
Robot Manipulation | SimplerEnv Google Robot Visual Matching | Pick Coke Can | 91.3 | 43
Robotic Manipulation | SIMPLER Google Robot VA | Pick Up Coke Can Success Rate | 96 | 35

(Showing 10 of 49 rows.)
