Grasp as You Dream: Imitating Functional Grasping from Generated Human Demonstrations

About

Building generalist robots capable of functional grasping in everyday, open-world environments remains a significant challenge due to the vast diversity of objects and tasks. Existing methods are either constrained to narrow object/task sets or rely on prohibitively large-scale data collection to capture real-world variability. In this work, we present an alternative approach, GraspDreamer, a method that leverages human demonstrations synthesized by visual generative models (VGMs), e.g., video generation models, to enable zero-shot functional grasping without labor-intensive data collection. The key idea is that VGMs pre-trained on internet-scale human data implicitly encode generalized priors about how humans interact with the physical world, which can be combined with embodiment-specific action optimization to enable functional grasping with minimal effort. Extensive experiments on public benchmarks with different robot hands demonstrate the superior data efficiency and generalization performance of GraspDreamer compared to previous methods. Real-world evaluations further validate its effectiveness on real robots. Additionally, we showcase that GraspDreamer can (1) be naturally extended to downstream manipulation tasks and (2) generate data to support visuomotor policy learning.
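The abstract gives no implementation details, but the "embodiment-specific action optimization" step it names can be illustrated with a toy example: retargeting a fingertip position (as might be extracted from a generated human demonstration) onto a robot finger by solving inverse kinematics. The sketch below uses a 2-link planar finger and damped least squares; the function names, link lengths, and choice of IK scheme are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

L1, L2 = 0.05, 0.04  # assumed toy finger link lengths (metres), illustrative only


def fingertip(q):
    """Forward kinematics of a 2-link planar finger: joint angles -> tip (x, y)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])


def jacobian(q):
    """Analytic Jacobian of the fingertip position w.r.t. the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])


def retarget(target, q0=(0.1, 0.1), iters=100, damping=1e-6):
    """Damped least-squares IK: joint angles whose fingertip reaches `target`."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = target - fingertip(q)  # task-space error
        J = jacobian(q)
        # Levenberg-Marquardt-style update: (J^T J + lambda*I) dq = J^T e
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ e)
        q += dq
    return q
```

For a reachable target, e.g. `retarget(fingertip(np.array([0.4, 0.6])))`, the optimized joint angles place the fingertip back at the target to numerical precision. A real system would optimize over all fingers plus the wrist pose, subject to joint limits and contact constraints.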

Chao Tang, Jiacheng Xu, Haofei Lu, Bolin Zou, Wenlong Dong, Hong Zhang, Danica Kragic • 2026

Related benchmarks

Task                            Dataset                            Success Rate    Rank
Functional Grasping             TaskGrasp Object Generalization    78.6            5
Functional Grasping             TaskGrasp Task Generalization      79.5            5
Dexterous Functional Grasping   DexGraspNet Kitchenware            80.2            4
Dexterous Functional Grasping   DexGraspNet Mechanic Tool          83.1            4
