
Geometry Matching for Multi-Embodiment Grasping

About

Many existing learning-based grasping approaches concentrate on a single embodiment, generalize poorly to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. We tackle the problem of grasping with multiple embodiments by learning rich geometric representations for both objects and end-effectors using Graph Neural Networks. Our novel method, GeoMatch, applies supervised learning to grasping data from multiple embodiments, learning end-to-end contact-point likelihood maps as well as conditional autoregressive predictions of grasps, keypoint by keypoint. We compare our method against baselines that support multiple embodiments. Our approach performs better across three end-effectors, while also producing diverse grasps. Examples, including real robot demos, can be found at geo-match.github.io.
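The conditional autoregressive idea above can be sketched in a toy form: each end-effector keypoint, in order, gets a likelihood map over candidate object points, conditioned on the contacts already chosen. This is only an illustrative sketch, not the paper's architecture; the real method learns these scores with Graph Neural Networks, whereas here the affinity function, the proximity penalty, and all names (`predict_contacts`, `prev_weight`) are invented for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def predict_contacts(obj_points, obj_feats, kp_feats, prev_weight=0.5):
    """Toy autoregressive contact selection (hypothetical stand-in for GeoMatch).

    For each end-effector keypoint feature (in a fixed order), score every
    object point by a feature affinity, subtract a penalty near contacts
    already chosen (so the prediction is conditioned on earlier keypoints),
    turn scores into a likelihood map with softmax, and pick the argmax.

    obj_points: (N, 3) object point positions
    obj_feats:  (N, D) per-point features
    kp_feats:   (K, D) per-keypoint features
    Returns a list of K object-point indices, one contact per keypoint.
    """
    chosen = []
    for kf in kp_feats:
        scores = obj_feats @ kf  # (N,) affinity of each object point to this keypoint
        for idx in chosen:
            d = np.linalg.norm(obj_points - obj_points[idx], axis=1)
            scores -= prev_weight * np.exp(-d)  # discourage piling onto prior contacts
        probs = softmax(scores)  # the per-keypoint contact likelihood map
        chosen.append(int(np.argmax(probs)))
    return chosen
```

In the actual method, the per-keypoint likelihood maps and the conditioning on previously predicted keypoints are learned end-to-end from multi-embodiment grasping data rather than hand-crafted as here.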

Maria Attarian, Muhammad Adil Asif, Jingzhou Liu, Ruthrash Hari, Animesh Garg, Igor Gilitschenski, Jonathan Tompson • 2023

Related benchmarks

Task: Cross-Embodiment Dexterous Grasp Generation
Dataset: MultiDex
Result: Success Rate (Barrett) = 60
Rank: 7
