
Baking Neural Radiance Fields for Real-Time View Synthesis

About

Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware. To achieve this, we introduce 1) a reformulation of NeRF's architecture, and 2) a sparse voxel grid representation with learned feature vectors. The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact (averaging less than 90 MB per scene), and can be rendered in real-time (higher than 30 frames per second on a laptop GPU). Actual screen captures are shown in our video.
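The key efficiency idea described above is deferred shading: the sparse voxel grid stores densities, diffuse colors, and learned feature vectors, so rendering alpha-composites those stored values along each ray and evaluates a tiny MLP only once per ray (for the view-dependent specular residual), rather than hundreds of times as in vanilla NeRF. The sketch below illustrates that compositing-then-decode structure with NumPy; all function names, shapes, and the random stand-in data are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def composite_ray(sigmas, rgb_d, feats, step):
    """Alpha-composite diffuse colors and feature vectors along one ray.

    sigmas: (S,) densities sampled from the (hypothetical) sparse voxel grid
    rgb_d:  (S, 3) diffuse colors stored in the grid
    feats:  (S, F) learned feature vectors stored in the grid
    step:   scalar distance between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigmas * step)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    w = alpha * trans                                    # compositing weights
    return w @ rgb_d, w @ feats                          # accumulated color, features

def tiny_mlp(x, params):
    """One-hidden-layer MLP producing a view-dependent RGB residual.
    `params` is a (W1, b1, W2, b2) tuple; the weights here are illustrative."""
    W1, b1, W2, b2 = params
    h = np.maximum(x @ W1 + b1, 0.0)                     # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # sigmoid keeps RGB in [0, 1]

# --- toy usage with random stand-in data (not real trained values) ---
rng = np.random.default_rng(0)
S, F = 32, 4                                             # samples per ray, feature dim
sigmas = rng.uniform(0.0, 5.0, S)
rgb_d = rng.uniform(0.0, 1.0, (S, 3))
feats = rng.uniform(0.0, 1.0, (S, F))
view_dir = np.array([0.0, 0.0, 1.0])                     # unit viewing direction

c_diffuse, f_acc = composite_ray(sigmas, rgb_d, feats, step=0.05)
mlp_in = np.concatenate([f_acc, view_dir])               # single MLP input per ray
params = (rng.normal(size=(F + 3, 16)), np.zeros(16),
          rng.normal(size=(16, 3)), np.zeros(3))
specular = tiny_mlp(mlp_in, params)                      # one MLP eval per ray
final_rgb = np.clip(c_diffuse + specular, 0.0, 1.0)      # diffuse + specular residual
```

Because the per-sample work reduces to table lookups and weighted sums, only the single per-ray MLP evaluation remains, which is what makes real-time frame rates on commodity GPUs plausible.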

Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec • 2021

Related benchmarks

| Task                 | Dataset                              | Result                          | Rank |
|----------------------|--------------------------------------|---------------------------------|------|
| Novel View Synthesis | NeRF Synthetic                       | PSNR 33                         | 92   |
| Novel View Synthesis | Synthetic-NeRF (test)                | PSNR 30.38                      | 48   |
| Novel View Synthesis | NeRF-synthetic original (test)       | PSNR 30.38                      | 25   |
| Novel View Synthesis | Synthetic 360° scenes Blender (test) | Chair Score 1.25e+3             | 21   |
| Novel View Synthesis | LLFF Forward-facing (test)           | PSNR 25.63                      | 20   |
| Novel View Synthesis | Forward-facing scenes                | Room Performance Score 3.59e+3  | 19   |
| Novel View Synthesis | Realistic Synthetic 360              | --                              | 15   |
| Novel View Synthesis | Synthetic-NeRF v1 (test)             | PSNR 30.38                      | 12   |
| Novel View Synthesis | Synthetic 360                        | FPS 207.3                       | 11   |
| Novel View Synthesis | LLFF Forward-facing                  | FPS 50.71                       | 11   |

Showing 10 of 17 rows.
