
Robotic Control via Embodied Chain-of-Thought Reasoning

About

A key limitation of learned robot control policies is their inability to generalize outside their training data. Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of learned robot policies can substantially improve their robustness and generalization ability. Yet, one of the most exciting capabilities of large vision-language models in other domains is their ability to reason iteratively through complex problems. Can that same capability be brought into robotics to allow policies to improve performance by reasoning about a given task before acting? Naive use of "chain-of-thought" (CoT) style prompting is significantly less effective with standard VLAs because of the relatively simple training examples that are available to them. Additionally, purely semantic reasoning about sub-tasks, as is common in regular CoT, is insufficient for robot policies that need to ground their reasoning in sensory observations and the robot state. To this end, we introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features like object bounding boxes and end effector positions, before predicting the robot action. We design a scalable pipeline for generating synthetic training data for ECoT on large robot datasets. We demonstrate that ECoT increases the absolute success rate of OpenVLA, the current strongest open-source VLA policy, by 28% across challenging generalization tasks, without any additional robot training data. Additionally, ECoT makes it easier for humans to interpret a policy's failures and correct its behavior using natural language.
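The reasoning structure described above (plan, sub-task, motion, and grounded features, emitted before the action) can be sketched as a small data structure. This is a minimal illustration only: the exact step names, ordering, and serialization format are assumptions for this sketch, not the paper's actual training format.

```python
from dataclasses import dataclass


@dataclass
class EcotChain:
    """One embodied chain-of-thought step, reasoned before the action.

    Field names and ordering are illustrative, not the paper's exact tags.
    """
    task: str              # rephrased task instruction
    plan: list             # high-level plan steps
    subtask: str           # sub-task currently being executed
    move: str              # low-level motion description
    gripper_px: tuple      # end-effector position in image pixels (x, y)
    objects: dict          # object name -> bounding box (x1, y1, x2, y2)

    def to_prompt(self) -> str:
        """Serialize the chain in reasoning order; a VLA would then
        predict the robot action conditioned on this text."""
        lines = [
            f"TASK: {self.task}",
            "PLAN: " + "; ".join(self.plan),
            f"SUBTASK: {self.subtask}",
            f"MOVE: {self.move}",
            f"GRIPPER POSITION: {self.gripper_px}",
            "VISIBLE OBJECTS: " + ", ".join(
                f"{name} {box}" for name, box in self.objects.items()),
        ]
        return "\n".join(lines)


chain = EcotChain(
    task="put the spoon on the towel",
    plan=["locate spoon", "grasp spoon", "move to towel", "release"],
    subtask="grasp spoon",
    move="move gripper down toward the spoon",
    gripper_px=(112, 88),
    objects={"spoon": (95, 70, 140, 105), "towel": (30, 120, 110, 180)},
)
print(chain.to_prompt())
```

In the actual method, each of these fields is produced autoregressively by the VLA itself (trained on synthetically generated reasoning annotations), so the grounded features like the gripper position and bounding boxes come from the model's own output, not from an external perception module.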

Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, Sergey Levine • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Understanding | MMBench | Accuracy | 0.9 | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 0.00e+0 | 531 |
| Visual Question Answering | ChartQA | Accuracy | 0.00e+0 | 371 |
| Multimodal Understanding | MMStar | Accuracy | 19.1 | 324 |
| Visual Question Answering | AI2D | Accuracy | 0.00e+0 | 249 |
| Visual Question Answering | DocVQA | Accuracy | 2.2 | 162 |
| Multimodal Understanding | MMMU (val) | -- | -- | 152 |
| Visual Question Answering | InfoVQA | Accuracy | 0.00e+0 | 135 |
| Robotic Manipulation | SIMPLER Visual Matching (WidowX robot) | Put Spoon on Towel Score | 4 | 51 |
| Multimodal Understanding | MME Perception | -- | -- | 46 |

Showing 10 of 22 rows.
