# Dynamic Plane Convolutional Occupancy Networks

## About
Learning-based 3D reconstruction using implicit neural representations has shown promising progress not only at the object level but also in more complex scenes. In this paper, we propose Dynamic Plane Convolutional Occupancy Networks, a novel implicit representation that pushes the quality of 3D surface reconstruction further. Noisy input point clouds are encoded into per-point features that are projected onto multiple dynamic 2D planes. A fully-connected network learns to predict the plane parameters that best describe the shapes of objects or scenes. To further exploit translational equivariance, convolutional neural networks are applied to process the plane features. Our method shows superior performance in surface reconstruction from unoriented point clouds on ShapeNet as well as on an indoor scene dataset. Moreover, we also provide interesting observations on the distribution of the learned dynamic planes.
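Concretely, the pipeline has two learned steps: a small fully-connected network predicts the normals of several dynamic planes from a pooled point-cloud feature, and per-point features are then rasterized onto a 2D grid on each predicted plane before being refined by a 2D CNN. The PyTorch sketch below illustrates these two steps under simplifying assumptions; the names (`DynamicPlanePredictor`, `project_to_plane`) and all hyperparameters are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicPlanePredictor(nn.Module):
    """Predicts unit normals of `num_planes` dynamic planes from a pooled
    point-cloud feature (hypothetical module; a sketch of the idea only)."""

    def __init__(self, feat_dim=512, num_planes=3):
        super().__init__()
        self.num_planes = num_planes
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_planes * 3),
        )

    def forward(self, global_feat):
        # global_feat: (B, feat_dim), pooled over all input points
        normals = self.mlp(global_feat).view(-1, self.num_planes, 3)
        # Each plane is parameterized by a unit normal through the origin.
        return F.normalize(normals, dim=-1)


def project_to_plane(points, feats, normals, resolution=64):
    """Scatter per-point features onto a 2D grid on each predicted plane.

    points:  (B, N, 3) coordinates in [-0.5, 0.5]
    feats:   (B, N, C) per-point features
    normals: (B, L, 3) unit plane normals
    Returns plane features of shape (B, L, C, resolution, resolution).
    """
    B, N, C = feats.shape
    L = normals.shape[1]
    planes = []
    for l in range(L):
        n = normals[:, l]  # (B, 3)
        # Build an orthonormal basis (u, v) spanning each plane; the small
        # offset avoids a degenerate cross product when n is near the z-axis.
        a = torch.tensor([0.0, 0.0, 1.0], device=n.device).expand_as(n)
        u = F.normalize(torch.cross(n, a + 1e-4, dim=-1), dim=-1)
        v = F.normalize(torch.cross(n, u, dim=-1), dim=-1)
        # 2D coordinates of every point in the plane basis, mapped to cells.
        uv = torch.stack([(points * u.unsqueeze(1)).sum(-1),
                          (points * v.unsqueeze(1)).sum(-1)], dim=-1)
        idx = ((uv + 0.5).clamp(0, 1 - 1e-6) * resolution).long()
        cell = idx[..., 0] * resolution + idx[..., 1]  # (B, N) flat cell ids
        # Average-pool features of points that fall into the same cell.
        grid = feats.new_zeros(B, resolution * resolution, C)
        grid.scatter_add_(1, cell.unsqueeze(-1).expand(-1, -1, C), feats)
        count = feats.new_zeros(B, resolution * resolution, 1)
        count.scatter_add_(1, cell.unsqueeze(-1),
                           torch.ones_like(cell, dtype=feats.dtype).unsqueeze(-1))
        grid = grid / count.clamp(min=1)
        planes.append(grid.view(B, resolution, resolution, C).permute(0, 3, 1, 2))
    return torch.stack(planes, dim=1)  # (B, L, C, res, res)


# Example with hypothetical shapes: 2 clouds of 3000 points, 32-dim features.
# points  = torch.rand(2, 3000, 3) - 0.5
# feats   = torch.randn(2, 3000, 32)
# normals = DynamicPlanePredictor(feat_dim=32)(feats.mean(dim=1))  # (2, 3, 3)
# planes  = project_to_plane(points, feats, normals)          # (2, 3, 32, 64, 64)
```

In the full model, each plane's feature grid would then be processed by a shared 2D convolutional network (e.g. a U-Net), exploiting translational equivariance, and queried at arbitrary 3D locations to predict occupancy.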
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| 3D Reconstruction | ShapeNet (test) | -- | 74 |
| Object-level 3D Reconstruction | ShapeNet 13 classes (test) | Chamfer-L1 Distance: 0.042 | 21 |
| Scene-level Reconstruction | Synthetic indoor scene dataset | IoU: 83.7 | 14 |
| Scene Reconstruction | Synthetic Rooms (test) | CD1: 0.42 | 7 |
| Scene-level 3D Reconstruction | Synthetic Room Dataset (10K points with noise) | Chamfer-L1: 0.42 | 7 |
| Surface Reconstruction | Synthetic Rooms, 10k noisy points (test) | Chamfer Distance (CD): 0.42 | 6 |