AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
About
We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization, and the ability to generate shapes at arbitrary resolution without memory issues. We demonstrate these benefits and compare against strong baselines on the ShapeNet benchmark for two applications: (i) auto-encoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing the potential of our approach for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.
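The core idea above can be sketched in code: each surface element is a small MLP that maps a 2D point sampled from the unit square, concatenated with a shape latent code, to a 3D point, and the union of all patches forms the surface. The sketch below uses NumPy with random (untrained) weights and illustrative sizes, not the paper's exact architecture; all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16   # shape code size (illustrative; far smaller than in practice)
HIDDEN = 32       # hidden width of each patch MLP (assumption)
N_PATCHES = 4     # number of learned parametric surface elements

def make_patch_mlp():
    # Random weights stand in for trained parameters of one surface element.
    return {
        "W1": rng.normal(size=(2 + LATENT_DIM, HIDDEN)) * 0.1,
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(size=(HIDDEN, 3)) * 0.1,
        "b2": np.zeros(3),
    }

patches = [make_patch_mlp() for _ in range(N_PATCHES)]

def decode(latent, n_points_per_patch=250):
    """Map 2D samples through every patch MLP to a 3D point cloud."""
    outputs = []
    for mlp in patches:
        uv = rng.uniform(size=(n_points_per_patch, 2))   # sample the unit square
        z = np.broadcast_to(latent, (n_points_per_patch, LATENT_DIM))
        x = np.concatenate([uv, z], axis=1)              # (N, 2 + LATENT_DIM)
        h = np.tanh(x @ mlp["W1"] + mlp["b1"])           # (N, HIDDEN)
        outputs.append(h @ mlp["W2"] + mlp["b2"])        # (N, 3) points on the patch
    return np.concatenate(outputs, axis=0)

latent = rng.normal(size=LATENT_DIM)
cloud = decode(latent)
print(cloud.shape)  # (1000, 3)
```

Because the patches are continuous functions of the 2D samples, increasing `n_points_per_patch` yields a denser surface at no extra model cost, which is how arbitrary output resolution is obtained.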
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Single-view Reconstruction | ShapeNet | pla | 2.54 | 20 |
| 3D Shape Reconstruction | ShapeNet Core v2 (val) | CD | 1.51 | 8 |
| Surface Reconstruction | Famous 22 shapes (test) | Chamfer Distance (no-n.) | 4.69 | 8 |
| Surface Reconstruction | Thingi10k 100 shapes (test) | CD x 100 (no-n.) | 5.29 | 8 |
| 3D Shape Reconstruction | ABC dataset (test) | -- | -- | 8 |
| Single-view Reconstruction | ShapeNet (test) | Chamfer Distance | 9.52 | 6 |
| 3D Reconstruction | FAUST (val) | Chamfer Distance | 15.47 | 3 |