
RoboGround: Robotic Manipulation with Grounded Vision-Language Priors

About

Recent advancements in robotic manipulation have highlighted the potential of intermediate representations for improving policy generalization. In this work, we explore grounding masks as an effective intermediate representation, balancing two key advantages: (1) effective spatial guidance that specifies target objects and placement areas while also conveying information about object shape and size, and (2) broad generalization potential driven by large-scale vision-language models pretrained on diverse grounding datasets. We introduce RoboGround, a grounding-aware robotic manipulation system that leverages grounding masks as an intermediate representation to guide policy networks in object manipulation tasks. To further explore and enhance generalization, we propose an automated pipeline for generating large-scale, simulated data with a diverse set of objects and instructions. Extensive experiments show the value of our dataset and the effectiveness of grounding masks as intermediate guidance, significantly enhancing the generalization abilities of robot policies.
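The paper does not include code here, but the core idea of using a grounding mask as spatial guidance for a policy can be sketched simply: the mask marking the target object or placement area is fused with the visual observation before it reaches the policy network. The function and variable names below are hypothetical illustrations, not the authors' implementation; one common fusion scheme (assumed here) is stacking the binary mask as an extra image channel.

```python
# Minimal sketch (hypothetical names): fusing a binary grounding mask with an
# RGB observation as an extra input channel for a manipulation policy.
import numpy as np

def fuse_mask_with_observation(rgb, mask):
    """Stack a grounding mask onto an RGB frame as a fourth channel.

    rgb:  (H, W, 3) float array in [0, 1]
    mask: (H, W) binary array marking the target object / placement area
    Returns a (H, W, 4) array suitable as policy input.
    """
    assert rgb.shape[:2] == mask.shape, "mask must match image resolution"
    mask_channel = mask.astype(rgb.dtype)[..., None]      # (H, W, 1)
    return np.concatenate([rgb, mask_channel], axis=-1)   # (H, W, 4)

# Example: a 64x64 observation with a rectangular target region.
rgb = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1.0  # region highlighted by the grounding model
obs = fuse_mask_with_observation(rgb, mask)
print(obs.shape)  # (64, 64, 4)
```

Because the mask preserves the object's shape and extent (unlike a bounding box or keypoint), the policy receives both *where* to act and *how large* the target is, which is the spatial-guidance advantage the abstract describes.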

Haifeng Huang, Xinyi Chen, Yilun Chen, Hao Li, Xiaoshen Han, Zehan Wang, Tai Wang, Jiangmiao Pang, Zhou Zhao • 2025

Related benchmarks

Task         | Dataset                | Metric       | Result (%) | Rank
-------------|------------------------|--------------|------------|-----
Open/Close   | RoboCasa               | Success Rate | 72.0       | 4
Pick-&-Place | RoboCasa Easy          | Contact Rate | 89.0       | 4
Pick-&-Place | RoboCasa Appearance    | Contact Rate | 78.5       | 4
Pick-&-Place | RoboCasa Spatial       | Contact Rate | 81.0       | 4
Pick-&-Place | RoboCasa Common-sense  | Contact Rate | 76.3       | 4
Press        | RoboCasa               | Success Rate | 69.3       | 4
Turn/Twist   | RoboCasa               | Success Rate | 54.5       | 4

Other info

Code
