MaskHand: Generative Masked Modeling for Robust Hand Mesh Reconstruction in the Wild
About
Reconstructing a 3D hand mesh from a single RGB image is challenging due to complex articulations, self-occlusions, and depth ambiguities. Traditional discriminative methods, which learn a deterministic mapping from a 2D image to a single 3D mesh, often struggle with the inherent ambiguities of the 2D-to-3D mapping. To address this challenge, we propose MaskHand, a novel generative masked model for hand mesh recovery that synthesizes plausible 3D hand meshes by learning and sampling from the probabilistic distribution of the ambiguous 2D-to-3D mapping process. MaskHand consists of two key components: (1) VQ-MANO, which encodes 3D hand articulations as discrete pose tokens in a latent space, and (2) a Context-Guided Masked Transformer, which randomly masks out pose tokens and learns their joint distribution conditioned on the corrupted token sequence, image context, and 2D pose cues. This learned distribution enables confidence-guided sampling at inference time, producing mesh reconstructions with low uncertainty and high precision. Extensive evaluations on benchmark and real-world datasets demonstrate that MaskHand achieves state-of-the-art accuracy, robustness, and realism in 3D hand mesh reconstruction. Project website: https://m-usamasaleem.github.io/publication/MaskHand/MaskHand.html.
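The confidence-guided sampling described above can be sketched as iterative masked decoding: start from a fully masked pose-token sequence, query the transformer, and commit the most confident predictions at each step until no masked tokens remain. The sketch below is illustrative only (it is not the authors' code); the `predict` callback, codebook size, and sequence length are all hypothetical placeholders.

```python
CODEBOOK_SIZE = 512      # hypothetical VQ-MANO codebook size
MASK_ID = CODEBOOK_SIZE  # special [MASK] token id outside the codebook
SEQ_LEN = 16             # hypothetical number of pose tokens per hand

def confidence_guided_decode(predict, context, num_steps=8):
    """Iteratively fill a fully masked pose-token sequence, committing the
    highest-confidence predictions first (a MaskGIT-style decoding sketch).

    `predict(tokens, context)` is assumed to return one (confidence,
    token_id) pair per position from the masked transformer.
    """
    tokens = [MASK_ID] * SEQ_LEN
    for step in range(1, num_steps + 1):
        preds = predict(tokens, context)  # list of (confidence, token_id)
        masked = [i for i, t in enumerate(tokens) if t == MASK_ID]
        # Schedule: the total number of committed tokens grows with each step.
        target = SEQ_LEN * step // num_steps
        n_commit = min(len(masked), max(0, target - (SEQ_LEN - len(masked))))
        # Commit the most confident masked positions; leave the rest masked.
        for i in sorted(masked, key=lambda i: preds[i][0], reverse=True)[:n_commit]:
            tokens[i] = preds[i][1]
    return tokens  # fully committed pose-token sequence for the VQ decoder
```

The committed token sequence would then be decoded by the VQ-MANO decoder into MANO pose parameters and a mesh; low-confidence positions stay masked longer, so each token is predicted with progressively richer context.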
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Hand Reconstruction | FreiHAND (test) | F@15mm | 99.1 | 148 |
| Hand Pose Estimation | HInt New Days v1 (test) | PCK @ 0.05 | 61 | 32 |
| Hand Pose Estimation | HInt - VISOR v1 (test) | PCK @ 0.05 | 62.1 | 32 |
| Hand Pose Estimation | Ego4D HInt v1 (test) | PCK @ 0.05 | 59.3 | 32 |
| 3D Hand Reconstruction | DexYCB (test) | MPVPE | 11.2 | 28 |
| Occluded Hand Joint Reconstruction | HInt Benchmark v1 (test) | NewDays PCK @ 0.05 | 29.4 | 11 |
| 3D Mesh Reconstruction | HO3D v3 | PA-MPJPE | 7 | 9 |
| Hand Pose Estimation | FreiHAND (test) | PA-MPVPE | 5.4 | 7 |