
GraspSplats: Efficient Manipulation with 3D Feature Splatting

About

The ability of robots to perform efficient and zero-shot grasping of object parts is crucial for practical applications and is becoming feasible with recent advances in Vision-Language Models (VLMs). To bridge the 2D-to-3D gap in representations that support such a capability, existing methods rely on neural fields (NeRFs) via differentiable rendering or on point-based projection. However, we demonstrate that NeRFs are ill-suited to scene changes because of their implicit representation, and that point-based methods are inaccurate for part localization without rendering-based optimization. To address these issues, we propose GraspSplats. Using depth supervision and a novel reference feature computation method, GraspSplats generates high-quality scene representations in under 60 seconds. We further validate the advantages of a Gaussian-based representation by showing that the explicit and optimized geometry in GraspSplats is sufficient to natively support (1) real-time grasp sampling and (2) dynamic and articulated object manipulation with point trackers. With extensive experiments on a Franka robot, we demonstrate that GraspSplats significantly outperforms existing methods under diverse task settings. In particular, GraspSplats outperforms NeRF-based methods like F3RM and LERF-TOGO, as well as 2D detection methods.
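The abstract notes that the explicit, optimized geometry natively supports real-time grasp sampling. As a rough illustration only (not the paper's actual sampler, whose details are not given here), a minimal antipodal grasp-sampling sketch over explicit points such as Gaussian centers could look like the following; the function name, parameters, and friction-cone test are all assumptions for illustration:

```python
import numpy as np

def sample_antipodal_grasps(centers, normals, part_mask,
                            gripper_width=0.08, mu=0.4,
                            n_samples=64, seed=0):
    """Sample antipodal grasp pairs directly on explicit point geometry.

    centers: (N, 3) points (e.g. Gaussian centers); normals: (N, 3) unit
    outward normals; part_mask: (N,) bool mask selecting the target part
    (e.g. from language-grounded feature matching).
    Returns unique (i, j) index pairs that pass a friction-cone
    antipodality test and fit within the gripper opening.
    """
    rng = np.random.default_rng(seed)
    part_idx = np.flatnonzero(part_mask)
    cone_cos = 1.0 / np.sqrt(1.0 + mu ** 2)  # cos(arctan(mu)), friction cone
    grasps = set()
    for _ in range(n_samples):
        i = rng.choice(part_idx)
        d = centers[part_idx] - centers[i]
        dist = np.linalg.norm(d, axis=1)
        valid = (dist > 1e-6) & (dist < gripper_width)
        if not valid.any():
            continue
        axis = d[valid] / dist[valid, None]       # candidate grasp axes i -> j
        cos_i = axis @ -normals[i]                # contact force at i is +axis
        cos_j = np.einsum('kd,kd->k', axis, normals[part_idx[valid]])
        ok = (cos_i > cone_cos) & (cos_j > cone_cos)
        for j in part_idx[valid][ok]:
            grasps.add((int(min(i, j)), int(max(i, j))))
    return sorted(grasps)
```

Because the geometry is explicit, such a sampler needs no rendering or optimization loop, which is why it can run in real time.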

Mazeyu Ji, Ri-Zhao Qiu, Xueyan Zou, Xiaolong Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Context-aware Robotic Grasping | Real-world context-aware grasping scenes | Object Success | 73 | 8 |
| Language-grounded 3D semantic segmentation | GraspClutter6D | 3D IoU | 17.38 | 4 |
| Language-grounded 3D semantic segmentation | Synthetic | 3D IoU | 20.7 | 4 |
| 2D Part Localization | Spray Bottle Pink Button | MAE | 0.0028 | 2 |
| 2D Part Localization | Hammer Handle | MAE | 0.0141 | 2 |
