Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions
About
Probabilistic diffusion models have achieved state-of-the-art results for image synthesis, inpainting, and text-to-image tasks. However, they are still in the early stages of generating complex 3D shapes. This work proposes Diffusion-SDF, a generative model for shape completion, single-view reconstruction, and reconstruction of real-world scanned point clouds. We use neural signed distance functions (SDFs) as our 3D representation, parameterizing the geometry of various signals (e.g., point clouds, 2D images) through neural networks. Neural SDFs are implicit functions, and diffusing them amounts to learning the reversal of their neural network weights, which we solve using a custom modulation module. Extensive experiments show that our method is capable of both realistic unconditional generation and conditional generation from partial inputs. This work expands the domain of diffusion models from learning 2D, explicit representations to 3D, implicit representations.
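To make the representation concrete: a signed distance function maps a 3D point to its distance from a surface, with the sign indicating inside or outside. The sketch below uses the analytic SDF of a unit sphere to illustrate the convention; it is not the paper's learned model, where a neural network (whose weights or latent modulation vector the diffusion model generates) plays the role of this function.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Analytic signed distance to a sphere centered at the origin.
    Negative inside the surface, zero on it, positive outside --
    the same convention a neural SDF f_theta(x) is trained to match."""
    return np.linalg.norm(points, axis=-1) - radius

# Query the field at a few points along the x-axis.
pts = np.array([[0.0, 0.0, 0.0],   # center: inside  -> -1.0
                [1.0, 0.0, 0.0],   # on the surface  ->  0.0
                [2.0, 0.0, 0.0]])  # outside         ->  1.0
print(sphere_sdf(pts))  # -> [-1.  0.  1.]
```

A mesh can then be extracted from any such field by locating the zero level set (e.g., with marching cubes), which is how implicit representations are converted back to explicit geometry.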
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Crown Generation | Dental Crown Design Dataset | CD-L2 (Premolar) | 0.228 | 11 |
| Dental Crown Mesh Reconstruction | Dental Crown Dataset (test) | Medial Area Difference (mm²) | 5.37 | 7 |
| Unconditional 3D Shape Generation | ShapeNet chairs | COV (CD) | 65.35 | 6 |
| Scene Synthesis | Our dataset | FID | 38.7 | 5 |
| Scene Synthesis | 3D-FRONT | FID | 35.6 | 5 |
| Shape Generation | DeepFashion3D (test) | COV (CD) | 67.09 | 5 |