
Point-NeRF: Point-based Neural Radiance Fields

About

Volumetric neural rendering methods like NeRF produce high-quality view-synthesis results but are optimized per scene, leading to prohibitive reconstruction times. Deep multi-view stereo methods, on the other hand, can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of both approaches by modeling a radiance field with a neural 3D point cloud and its associated neural features. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline. Moreover, it can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can then be fine-tuned to surpass the visual quality of NeRF with 30x faster training time. Point-NeRF can also be combined with other 3D reconstruction methods, handling their errors and outliers via a novel point pruning and growing mechanism. Experiments on the DTU, NeRF Synthetic, ScanNet, and Tanks and Temples datasets demonstrate that Point-NeRF surpasses existing methods and achieves state-of-the-art results.
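The rendering pipeline described above can be sketched in two steps: at each ray sample, aggregate features from nearby neural points, then composite per-sample density and color along the ray with standard volume rendering. The sketch below is a hypothetical simplification: Point-NeRF uses learned MLPs for per-point processing and aggregation, whereas here plain inverse-distance weighting stands in for the learned aggregation, and the function and parameter names are illustrative, not from the paper's codebase.

```python
import numpy as np

def aggregate_point_features(sample_xyz, point_xyz, point_feat, radius=0.1):
    """Aggregate neural point features at one ray-sample location.

    Hypothetical stand-in for Point-NeRF's learned aggregation: points within
    `radius` of the sample are combined by normalized inverse-distance weights.
    Samples with no nearby points get a zero feature (empty space).
    """
    d = np.linalg.norm(point_xyz - sample_xyz, axis=1)   # (N,) distances
    mask = d < radius
    if not mask.any():
        return np.zeros(point_feat.shape[1])
    w = 1.0 / (d[mask] + 1e-8)                           # closer points weigh more
    w /= w.sum()
    return (w[:, None] * point_feat[mask]).sum(axis=0)

def volume_render(sigmas, colors, deltas):
    """Standard volume rendering: alpha-composite samples along a ray.

    sigmas: (S,) densities, colors: (S, 3) RGB, deltas: (S,) step sizes.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                             # contribution per sample
    return (weights[:, None] * colors).sum(axis=0)
```

In the full method, the aggregated feature at each sample would be decoded by small MLPs into the density and view-dependent color consumed by `volume_render`; sparsity in the point cloud lets samples far from surfaces be skipped entirely, which is where the efficiency gain comes from.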

Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, Ulrich Neumann · 2022

Related benchmarks

Task                          Dataset                          Metric    Result  Rank
Novel View Synthesis          Tanks&Temples (test)             PSNR      29.61   239
Novel View Synthesis          DTU                              PSNR      30.12   100
Novel View Synthesis          NeRF Synthetic                   PSNR      33.31   92
Novel View Synthesis          ScanNet                          PSNR      30.32   58
Novel View Synthesis          Tanks&Temples                    PSNR      24.75   52
Novel View Synthesis          NeRF-synthetic original (test)   PSNR      33.3    25
Novel View Synthesis          ScanNet (test)                   PSNR      30.32   25
Novel View Synthesis          NeRF Synthetic Blender (test)    Avg PSNR  33.3    24
Novel View Synthesis          ScanNet (novel view)             PSNR      28.99   15
Driving Scene Reconstruction  KITTI-360                        PSNR      21.54   10

(Showing 10 of 18 rows.)
