
A Dual Process VLA: Efficient Robotic Manipulation Leveraging VLM

About

Vision-Language-Action (VLA) models are receiving increasing attention for their ability to enable robots to perform complex tasks by integrating visual context with linguistic commands. However, achieving efficient real-time performance remains challenging due to the high computational demands of existing models. To overcome this, we propose Dual Process VLA (DP-VLA), a hierarchical framework inspired by dual-process theory. DP-VLA utilizes a Large System 2 Model (L-Sys2) for complex reasoning and decision-making, while a Small System 1 Model (S-Sys1) handles real-time motor control and sensory processing. By leveraging Vision-Language Models (VLMs), the L-Sys2 operates at low frequencies, reducing computational overhead, while the S-Sys1 ensures fast and accurate task execution. Experimental results on the RoboCasa dataset demonstrate that DP-VLA achieves faster inference and higher task success rates, providing a scalable solution for advanced robotic applications.
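The dual-frequency split described above can be sketched as a simple control loop: a slow "System 2" model refreshes a latent plan only every N steps, while a fast "System 1" controller emits an action at every step using the cached plan. The class and function names below are hypothetical illustrations, not the paper's actual implementation; the real L-Sys2 would be a large VLM rather than the placeholder encoder shown here.

```python
import numpy as np

class LargeSystem2:
    """Hypothetical stand-in for the slow VLM-based reasoner (L-Sys2).
    In DP-VLA this would be a large vision-language model; here it just
    returns a fixed-size latent 'plan' embedding."""
    def plan(self, image, instruction):
        # Placeholder: a real model would jointly encode image + text.
        return np.ones(8)

class SmallSystem1:
    """Hypothetical stand-in for the fast controller (S-Sys1): maps the
    current observation plus the cached latent plan to a motor action."""
    def act(self, image, latent_plan):
        # Placeholder 4-DoF action derived from the cached plan.
        return 0.1 * latent_plan[:4]

def dual_process_rollout(l_sys2, s_sys1, observations, instruction,
                         sys2_period=10):
    """Run the fast loop every step; refresh the slow plan only every
    `sys2_period` steps, so the expensive model runs at low frequency."""
    actions, latent = [], None
    for t, image in enumerate(observations):
        if t % sys2_period == 0:               # low-frequency reasoning
            latent = l_sys2.plan(image, instruction)
        actions.append(s_sys1.act(image, latent))  # high-frequency control
    return actions
```

With `sys2_period=10`, the expensive model is invoked on only 1 in 10 control steps, which is the source of the inference-speed gain the abstract claims; the exact update ratio in DP-VLA is not stated here and would depend on the task.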

ByungOk Han, Jaehong Kim, Jinhyeok Jang • 2024

Related benchmarks

Task                 | Dataset                                | Result                     | Rank
Robotic Manipulation | LIBERO Spatial                         | Success Rate: 19.8         | 314
Kitchen manipulation | RoboCasa 24 kitchen manipulation tasks | Average Success Rate: 57.3 | 12
