Point-NeRF: Point-based Neural Radiance Fields
About
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per scene, leading to prohibitive reconstruction times. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can then be finetuned to surpass the visual quality of NeRF with 30× faster training time. Point-NeRF can also be combined with other 3D reconstruction methods, handling their errors and outliers via a novel point pruning and growing mechanism. Experiments on the DTU, NeRF Synthetic, ScanNet, and Tanks and Temples datasets demonstrate that Point-NeRF can surpass existing methods and achieve state-of-the-art results.
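The core rendering idea above can be sketched in a few lines: at each ray-marching sample, features of nearby neural points are aggregated and decoded into density and color, which are alpha-composited along the ray. This is a simplified illustration, not the paper's implementation: Point-NeRF uses learned MLP aggregation, while the sketch below substitutes inverse-distance weighting, and `decode` is a hypothetical stand-in for the radiance MLPs.

```python
import numpy as np

def aggregate_point_features(x, points, feats, radius=0.1):
    # Inverse-distance-weighted aggregation of neural point features near
    # shading location x. (Simplified stand-in for Point-NeRF's learned
    # MLP aggregation; `radius` is an assumed neighborhood size.)
    d = np.linalg.norm(points - x, axis=1)
    mask = d < radius
    if not mask.any():
        return None  # no neural points nearby -> treat as empty space
    w = 1.0 / (d[mask] + 1e-8)
    w /= w.sum()
    return (w[:, None] * feats[mask]).sum(axis=0)

def march_ray(origin, direction, points, feats, decode,
              n_samples=64, t_far=2.0):
    # Standard ray marching with alpha compositing. `decode` maps an
    # aggregated feature vector to (density, rgb) and stands in for the
    # per-sample radiance MLPs.
    ts = np.linspace(0.0, t_far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        f = aggregate_point_features(origin + t * direction, points, feats)
        if f is None:
            continue  # sample far from all neural points: skip cheaply
        sigma, rgb = decode(f)
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color
```

Skipping samples with no nearby neural points is what makes this pipeline efficient: computation concentrates near scene surfaces, where the point cloud lives, instead of being spent uniformly along every ray as in vanilla NeRF.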
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Novel View Synthesis | Tanks&Temples (test) | PSNR 29.61 | 239 |
| Novel View Synthesis | DTU | PSNR 30.12 | 100 |
| Novel View Synthesis | NeRF Synthetic | PSNR 33.31 | 92 |
| Novel View Synthesis | ScanNet | PSNR 30.32 | 58 |
| Novel View Synthesis | Tanks&Temples | PSNR 24.75 | 52 |
| Novel View Synthesis | NeRF-synthetic original (test) | PSNR 33.3 | 25 |
| Novel View Synthesis | ScanNet (test) | PSNR 30.32 | 25 |
| Novel View Synthesis | NeRF Synthetic Blender (test) | Avg PSNR 33.3 | 24 |
| Novel View Synthesis | ScanNet (novel view) | PSNR 28.99 | 15 |
| Driving Scene Reconstruction | KITTI-360 | PSNR 21.54 | 10 |