Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction
About
Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF. Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood. This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference. We demonstrate the effectiveness and generalization power of DeepLS by showing object shape encoding and reconstructions of full scenes, where DeepLS delivers high compression, accuracy, and local shape completion.
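The decomposition described above can be sketched in a few lines: a regular grid of independent latent codes, where the SDF at a query point is decoded from the code of the cell containing it, using coordinates relative to that cell's center. This is an illustrative sketch, not the authors' implementation; the grid size, code dimension, and the stand-in analytic decoder (a code-modulated sphere in place of the shared neural decoder) are all assumptions.

```python
import numpy as np

VOXEL_SIZE = 1.0   # edge length of each local cell (illustrative)
GRID_DIM = 4       # 4x4x4 grid of local latent codes (illustrative)
CODE_DIM = 8       # per-cell latent code length (illustrative)

rng = np.random.default_rng(0)
# One independent latent code per grid cell, as in the DeepLS idea.
latent_grid = rng.normal(size=(GRID_DIM, GRID_DIM, GRID_DIM, CODE_DIM))

def cell_index(x):
    """Map a world-space point to the grid cell that owns it."""
    idx = np.floor(np.asarray(x) / VOXEL_SIZE).astype(int)
    return np.clip(idx, 0, GRID_DIM - 1)

def decode_sdf(code, local_x):
    """Stand-in for the shared neural decoder f(z, x_local) -> SDF.
    Here: signed distance to a sphere whose radius is modulated by
    the first entry of the latent code (purely illustrative)."""
    radius = 0.3 + 0.05 * np.tanh(code[0])
    return float(np.linalg.norm(local_x) - radius)

def query_sdf(x):
    """Evaluate the SDF at a world-space point using only the latent
    code of the local cell that contains it."""
    x = np.asarray(x, dtype=float)
    idx = cell_index(x)
    center = (idx + 0.5) * VOXEL_SIZE
    code = latent_grid[tuple(idx)]
    return decode_sdf(code, x - center)

# A point at a cell center lies inside the local sphere (negative SDF);
# a point near a cell corner lies outside it (positive SDF).
print(query_sdf([0.5, 0.5, 0.5]))
print(query_sdf([3.9, 3.9, 3.9]))
```

Because each cell's code only has to describe the surface inside a small neighborhood, the shared decoder faces a much simpler distribution of shapes than a single object-level code would, which is the intuition behind the efficiency and generalization claims above.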
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Compression Decoding | Videos 1080p | FPS | 0.1995 | 8 |
| Surface Reconstruction | 3D Scene dataset | CD L2 | 0.0016 | 6 |
| Shape Reconstruction | ShapeNet novel categories | Cabinet | 9.74 | 4 |
| 3D Shape Reconstruction | ShapeNet Seen Categories (novel instances) | CD (chair) | 7.7 | 4 |