
Deep Point Cloud Reconstruction

About

Point clouds obtained from 3D scanning are often sparse, noisy, and irregular. To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds. In this paper, we advocate that jointly solving these tasks leads to significant improvements in point cloud reconstruction. To this end, we propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a transformer-based refinement that converts the discrete voxels into 3D points. In particular, we further improve the transformer's performance with a newly proposed module called amplified positional encoding. This module is designed to amplify the magnitude of the positional encoding vectors differently based on each point's distance, enabling adaptive refinement. Extensive experiments demonstrate that our network achieves state-of-the-art performance among recent studies on the ScanNet, ICL-NUIM, and ShapeNetPart datasets. Moreover, we underline the ability of our network to generalize to real-world and unseen scenes.
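The amplified positional encoding idea can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the exact amplification rule, reference point, and encoding dimensions are not specified here, so a centroid-based distance and a simple linear amplitude are assumed.

```python
import numpy as np

def amplified_positional_encoding(points, d_model=64, ref=None):
    """Sketch: scale sinusoidal positional encodings by a per-point
    amplitude derived from the point's distance to a reference.
    The amplification rule and reference choice are assumptions."""
    if ref is None:
        ref = points.mean(axis=0)                   # assumed reference: centroid
    dist = np.linalg.norm(points - ref, axis=1)     # per-point distance
    amp = 1.0 + dist / (dist.max() + 1e-8)          # assumed rule: farther -> larger magnitude

    # standard sinusoidal encoding driven by the scalar distance
    i = np.arange(d_model // 2)
    freq = 1.0 / (10000.0 ** (2 * i / d_model))
    angles = dist[:, None] * freq[None, :]
    pe = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
    return amp[:, None] * pe                        # shape (n_points, d_model)
```

In a transformer refinement stage, such an encoding would be added to (or concatenated with) each point's feature vector before attention, so that distant points receive stronger positional signals.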

Jaesung Choe, Byeongin Joung, Francois Rameau, Jaesik Park, In So Kweon• 2021

Related benchmarks

Task                         Dataset                   Chamfer Distance (CD)   Rank
Point Cloud Reconstruction   ShapeNet Part 87 (test)   1.19                    5
Point Cloud Reconstruction   ScanNet 9 (test)          2.86                    5
Point Cloud Reconstruction   ICL-NUIM 20 (test)        2.78                    5
