GraspSplats: Efficient Manipulation with 3D Feature Splatting
About
The ability of robots to perform efficient, zero-shot grasping of object parts is crucial for practical applications and is becoming increasingly attainable with recent advances in Vision-Language Models (VLMs). To bridge the 2D-to-3D gap for representations that support such a capability, existing methods rely on neural fields (NeRFs) via differentiable rendering or on point-based projection. However, we demonstrate that NeRFs handle scene changes poorly because their representation is implicit, and that point-based methods localize parts inaccurately without rendering-based optimization. To address these issues, we propose GraspSplats. Using depth supervision and a novel reference feature computation method, GraspSplats generates high-quality scene representations in under 60 seconds. We further validate the advantages of a Gaussian-based representation by showing that the explicit, optimized geometry in GraspSplats natively supports (1) real-time grasp sampling and (2) dynamic and articulated object manipulation with point trackers. In extensive experiments on a Franka robot, GraspSplats significantly outperforms existing methods across diverse task settings; in particular, it outperforms NeRF-based methods such as F3RM and LERF-TOGO, as well as 2D detection methods.
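To make the part-querying and grasp-sampling ideas concrete, below is a minimal sketch of how explicit Gaussian centers with distilled per-Gaussian features could be queried with a text embedding and used for naive antipodal grasp sampling. This is not the GraspSplats implementation: it assumes NumPy, assumes the per-Gaussian features and a VLM text embedding are already available, and every function name and parameter (`query_part`, `sample_grasps`, `thresh`, `max_width`) is a hypothetical illustration.

```python
import numpy as np

def query_part(centers, feats, text_emb, thresh=0.3):
    """Select Gaussian centers whose features match a text query.

    centers: (N, 3) explicit Gaussian means; feats: (N, D) per-Gaussian
    features (hypothetically distilled from a VLM); text_emb: (D,) query
    embedding. Returns the 3D points of the matched part.
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    sim = f @ t  # cosine similarity per Gaussian
    return centers[sim > thresh]

def sample_grasps(part_pts, n_samples=32, max_width=0.08, rng=None):
    """Sample antipodal grasp candidates directly on explicit geometry.

    Draws random point pairs on the part and keeps pairs whose distance
    fits between the gripper fingers; returns (center, axis) tuples.
    """
    rng = rng or np.random.default_rng(0)
    grasps = []
    for _ in range(200 * n_samples):  # bounded attempts, not exhaustive
        i, j = rng.choice(len(part_pts), size=2, replace=False)
        p, q = part_pts[i], part_pts[j]
        width = np.linalg.norm(p - q)
        if 1e-4 < width < max_width:  # candidate fits the gripper
            grasps.append(((p + q) / 2, (q - p) / width))
        if len(grasps) == n_samples:
            break
    return grasps

# Toy usage with random data standing in for a trained feature splat.
centers = np.random.default_rng(1).normal(size=(1000, 3)) * 0.02
feats = np.random.default_rng(2).normal(size=(1000, 16))
text_emb = np.random.default_rng(3).normal(size=16)
part = query_part(centers, feats, text_emb)
print(len(sample_grasps(part)), "grasp candidates")
```

Because the geometry is explicit (a set of 3D Gaussian means rather than an implicit field), this kind of sampling needs no rendering or network queries, which is the property the abstract credits for real-time grasp sampling.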
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Context-aware Robotic Grasping | Real-world context-aware grasping scenes | Object Success | 73 | 8 |
| Language-grounded 3D semantic segmentation | GraspClutter6D | 3D IoU | 17.38 | 4 |
| Language-grounded 3D semantic segmentation | Synthetic | 3D IoU | 20.7 | 4 |
| 2D Part Localization | Spray Bottle Pink Button | MAE | 0.0028 | 2 |
| 2D Part Localization | Hammer Handle | MAE | 0.0141 | 2 |