CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation
About
The advancement of large Vision-Language-Action (VLA) models has significantly improved robotic manipulation in terms of language-guided task execution and generalization to unseen scenarios. While existing VLAs adapted from pretrained large Vision-Language Models (VLMs) have demonstrated promising generalizability, their task performance is still unsatisfactory, as indicated by low task success rates across different environments. In this paper, we present a new advanced VLA architecture derived from VLMs. Unlike previous works that directly repurpose a VLM for action prediction through simple action quantization, we propose a componentized VLA architecture with a specialized action module conditioned on the VLM's output. We systematically study the design of the action module and demonstrate the strong performance gains of diffusion action transformers for action sequence modeling, as well as their favorable scaling behavior. We also conduct comprehensive experiments and ablation studies to evaluate the efficacy of our models with varied designs. Evaluation on 5 robot embodiments in simulation and the real world shows that our model not only significantly surpasses existing VLAs in task performance but also exhibits remarkable adaptation to new robots and generalization to unseen objects and backgrounds. It exceeds the average success rate of OpenVLA, which has a similar model size (7B) to ours, by over 35% in simulated evaluation and 55% in real-robot experiments. It also outperforms the large RT-2-X model (55B) by 18% in absolute success rate in simulation. Code and models can be found on our project page (https://cogact.github.io/).
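To make the componentized design concrete, the sketch below shows one plausible way an action module of this kind could look: a small diffusion transformer that denoises a chunk of future actions, conditioned on a single "cognition" feature taken from the VLM's output. This is a minimal illustration, not the authors' implementation; the class name `DiffusionActionHead`, the 4096-dimensional cognition feature, the 16-step action horizon, the 7-DoF action space, and the uniform stand-in noise schedule are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class DiffusionActionHead(nn.Module):
    """Illustrative diffusion action transformer (names and sizes assumed).

    Denoises a short chunk of future actions, conditioned on a single
    cognition feature extracted from the VLM's output.
    """

    def __init__(self, action_dim=7, horizon=16, cond_dim=4096,
                 d_model=512, n_layers=6, n_timesteps=1000):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, d_model)   # embed noisy actions
        self.cond_proj = nn.Linear(cond_dim, d_model)       # embed VLM cognition feature
        self.time_emb = nn.Embedding(n_timesteps, d_model)  # diffusion step embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, action_dim)           # predict the added noise

    def forward(self, noisy_actions, cognition_feature, t):
        # noisy_actions: (B, horizon, action_dim)
        # cognition_feature: (B, cond_dim); t: (B,) integer diffusion steps
        tokens = self.action_proj(noisy_actions)
        cond = (self.cond_proj(cognition_feature) + self.time_emb(t)).unsqueeze(1)
        h = self.backbone(torch.cat([cond, tokens], dim=1))
        return self.out(h[:, 1:])  # drop the conditioning token

# Toy training step: corrupt a ground-truth action chunk with noise and
# train the head to predict that noise (standard DDPM-style objective).
head = DiffusionActionHead()
actions = torch.randn(2, 16, 7)      # ground-truth action chunk
cognition = torch.randn(2, 4096)     # stand-in for a VLM output feature
t = torch.randint(0, 1000, (2,))
noise = torch.randn_like(actions)
a_bar = torch.rand(2, 1, 1)          # stand-in for a real noise schedule
noisy = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise
loss = nn.functional.mse_loss(head(noisy, cognition, t), noise)
loss.backward()
```

The key design point this sketch reflects is the separation of concerns described in the abstract: the VLM handles cognition (vision and language understanding), while a dedicated module, here conditioned on a single feature vector, handles action sequence modeling, so the two can be scaled and studied independently.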
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot Manipulation | LIBERO | Goal Achievement | 90.2 | 494 |
| Robot Manipulation | SimplerEnv WidowX Robot tasks (test) | Success Rate (Spoon) | 71.7 | 79 |
| Robot Manipulation | SimplerEnv Google Robot tasks Visual Matching | Pick Coke Can Success Rate | 91.3 | 62 |
| Robot Manipulation | SimplerEnv Google Robot tasks Variant Aggregation | Pick Coke Can Success Rate | 89.6 | 44 |
| Robotic Manipulation | LIBERO v1 (test) | Config 10 Score | 88.8 | 27 |
| Robotic Manipulation | SIMPLER Google Robot Visual Matching | PickCan Success Rate | 91.3 | 24 |
| Robotic Manipulation | SIMPLER Visual Matching WidowX robot | Put Spoon on Towel Score | 71.7 | 24 |
| Robotic Manipulation | SIMPLER Google Robot VA | Pick Up Coke Can Success Rate | 89.6 | 20 |
| Robot Manipulation | SimplerEnv OOD | Put Spoon on Towel Success Rate | 71.7 | 19 |
| Robotic Manipulation | SimplerEnv | Success Rate: Spoon on Towel | 75 | 14 |