NASA: Neural Articulated Shape Approximation
About
Efficient representation of articulated objects such as human bodies is an important problem in computer vision and graphics. To efficiently simulate deformation, existing approaches represent 3D objects using polygonal meshes and deform them using skinning techniques. This paper introduces neural articulated shape approximation (NASA), an alternative framework that enables efficient representation of articulated deformable objects using neural indicator functions that are conditioned on pose. Occupancy testing using NASA is straightforward, circumventing the complexity of meshes and the issue of water-tightness. We demonstrate the effectiveness of NASA for 3D tracking applications, and discuss other potential extensions.
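The core idea can be illustrated with a minimal sketch of pose-conditioned occupancy testing. All names here are ours, not the paper's: NASA's piecewise variant evaluates a per-bone occupancy network on the query point expressed in that bone's local frame and takes the max over parts; in this toy version each learned network is replaced by an analytic indicator (a unit sphere) so the sketch runs end to end.

```python
import numpy as np

def make_bone_transform(translation):
    """4x4 rigid transform for a bone (rotation omitted for brevity)."""
    T = np.eye(4)
    T[:3, 3] = translation
    return T

def part_indicator(x_local):
    """Toy stand-in for a per-part occupancy network: unit sphere at origin."""
    return 1.0 if np.linalg.norm(x_local) <= 1.0 else 0.0

def occupancy(x, bone_transforms):
    """Pose-conditioned occupancy: map the query point into each bone's
    local frame, evaluate that part's indicator, and take the max."""
    xh = np.append(x, 1.0)  # homogeneous coordinates
    values = [part_indicator((np.linalg.inv(B) @ xh)[:3])
              for B in bone_transforms]
    return max(values)

# Two "bones" posed at different positions define the articulated shape.
pose = [make_bone_transform([0.0, 0.0, 0.0]),
        make_bone_transform([2.0, 0.0, 0.0])]

print(occupancy(np.array([0.0, 0.0, 0.0]), pose))  # inside first part  -> 1.0
print(occupancy(np.array([2.0, 0.5, 0.0]), pose))  # inside second part -> 1.0
print(occupancy(np.array([1.0, 1.0, 1.0]), pose))  # outside both       -> 0.0
```

Note how the query reduces to a function evaluation, with no mesh intersection or watertightness check involved; posing the shape amounts to changing the bone transforms.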
Boyang Deng, JP Lewis, Timothy Jeruzalski, Gerard Pons-Moll, Geoffrey Hinton, Mohammad Norouzi, Andrea Tagliasacchi • 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Articulated Mesh Reconstruction | DFaust Within Distribution 6 (train/test) | IoU (Bbox) | 97.94 | 30 |
| Articulated Mesh Reconstruction | DFaust Out of Distribution 6 (unseen pose) | IoU (Bbox) | 87.71 | 30 |
| 3D Hand Shape and Color Reconstruction | DeepHandMesh (test) | V2V Distance | 5.04 | 17 |
| Color reconstruction | InterHand2.6M | PSNR | 28.44 | 15 |
| Shape reconstruction from point clouds | 3DH 62 (test) | V2V Distance (mm) | 3.05 | 14 |
| Shape reconstruction from point clouds | MANO 53 (test) | V2V Error (mm) | 2.57 | 14 |
| Pose-aware shape modeling | CAPE Extrapolation (test) | Pl | 0.343 | 3 |
| Pose-aware shape modeling | CAPE Interpolation (test) | Ds2m | 1.12 | 3 |