
DextER: Language-driven Dexterous Grasp Generation with Embodied Reasoning

About

Language-driven dexterous grasp generation requires models that understand task semantics, 3D geometry, and complex hand-object interactions. While vision-language models have been applied to this problem, existing approaches map observations directly to grasp parameters without intermediate reasoning about physical interactions. We present DextER, Dexterous Grasp Generation with Embodied Reasoning, which introduces contact-based embodied reasoning for multi-finger manipulation. Our key insight is that predicting which hand links contact which locations on the object surface provides an embodiment-aware intermediate representation bridging task semantics with physical constraints. DextER autoregressively generates embodied contact tokens specifying these link-to-surface contacts, followed by grasp tokens encoding the hand configuration. On DexGYS, DextER achieves a 67.14% success rate, outperforming the state of the art by 3.83 percentage points with a 96.4% improvement in intention alignment. We also demonstrate steerable generation through partial contact specification, providing fine-grained control over grasp synthesis.
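The two-stage decoding described above (contact tokens first, then grasp tokens) and the steerable generation via partial contact specification can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the vocabulary sizes, token layout, and the `next_token` stand-in (here a uniform sampler in place of the learned autoregressive model) are all assumptions.

```python
import random

# Hypothetical vocabulary sizes; the paper's actual tokenization differs.
NUM_LINKS = 5          # discretized hand links (e.g. finger segments)
NUM_SURFACE_BINS = 16  # discretized object-surface locations
NUM_CONTACTS = 3       # contact tokens emitted before grasp tokens
NUM_GRASP_TOKENS = 4   # tokens encoding the final hand configuration

def next_token(prefix, vocab_size, rng):
    """Stand-in for the learned autoregressive policy: sample uniformly.

    A real model would condition on the language instruction, the object
    point cloud, and the token prefix generated so far.
    """
    return rng.randrange(vocab_size)

def generate_grasp(partial_contacts=(), seed=0):
    """Autoregressively emit (link, surface) contact pairs, then grasp tokens.

    `partial_contacts` lets a caller pin some contacts up front, mirroring
    the steerable generation described in the abstract: fixed contacts form
    the prefix, and the model completes the rest of the sequence.
    """
    rng = random.Random(seed)
    contacts = list(partial_contacts)        # user-specified prefix, if any
    while len(contacts) < NUM_CONTACTS:      # complete the contact sequence
        link = next_token(contacts, NUM_LINKS, rng)
        point = next_token(contacts, NUM_SURFACE_BINS, rng)
        contacts.append((link, point))
    # Grasp tokens are decoded conditioned on the full contact sequence.
    grasp = [next_token(contacts, 32, rng) for _ in range(NUM_GRASP_TOKENS)]
    return contacts, grasp

# Steerable generation: force the first contact (thumb link 0, surface bin 3).
contacts, grasp = generate_grasp(partial_contacts=[(0, 3)])
```

The key design point the sketch reflects is that contacts act as an intermediate representation: pinning part of the contact prefix constrains all later tokens without retraining.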

Junha Lee, Eunha Park, Minsu Cho · 2026

Related benchmarks

Task                         Dataset                      Result        Rank
Dexterous Grasp Generation   Dexonomy Unseen Obj.         P-FID 0.14    14
Dexterous Grasp Generation   DexGYSNet                    P-FID 0.2     12
Dexterous Grasp Generation   Dexonomy Seen Obj. & Grasp   P-FID 0.12    7
Dexterous Grasp Generation   Dexonomy Unseen Grasp        P-FID 0.83    7
