
ReconFusion: 3D Reconstruction with Diffusion Priors

About

3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at rendering photorealistic novel views of complex scenes. However, recovering a high-quality NeRF typically requires tens to hundreds of input images, resulting in a time-consuming capture process. We present ReconFusion to reconstruct real-world scenes using only a few photos. Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel camera poses beyond those captured by the set of input images. Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions. We perform an extensive evaluation across various real-world datasets, including forward-facing and 360-degree scenes, demonstrating significant performance improvements over previous few-view NeRF reconstruction approaches.
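The regularization described above combines fidelity to the captured photos with a diffusion-prior term at unobserved camera poses. A minimal sketch of such a combined objective follows; every name here (`render` inputs, the weight `LAMBDA`, using a simple MSE toward the diffusion output) is an illustrative assumption, not the paper's actual loss formulation.

```python
import math

# Hypothetical per-iteration objective: a pixel reconstruction loss on the
# captured input views, plus a prior term that pulls renderings at novel
# (unobserved) poses toward the diffusion model's predicted image.
# LAMBDA is an assumed relative weight, not a value from the paper.
LAMBDA = 0.5

def mse(a, b):
    """Mean squared error between two flat lists of pixel values."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def combined_loss(rendered_obs, observed, rendered_novel, prior_sample):
    """Reconstruction term on observed views + weighted diffusion-prior term."""
    recon = mse(rendered_obs, observed)        # match the captured photos
    prior = mse(rendered_novel, prior_sample)  # match the prior's prediction
    return recon + LAMBDA * prior
```

With this weighting, underconstrained regions (seen only at novel poses) are shaped by the prior term, while observed regions remain dominated by the reconstruction term.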

Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Novel View Synthesis | LLFF | PSNR 25.21 | 124 |
| Novel View Synthesis | RealEstate10K | PSNR 31.82 | 116 |
| Novel View Synthesis | Mip-NeRF 360 | PSNR 18.19 | 104 |
| Novel View Synthesis | DTU | PSNR 24.62 | 100 |
| Novel View Synthesis | LLFF 3-view | PSNR 21.34 | 95 |
| Novel View Synthesis | DTU (test) | PSNR 24.62 | 82 |
| Novel View Synthesis | LLFF 9-view | PSNR 25.21 | 75 |
| Novel View Synthesis | LLFF 6-view | PSNR 24.25 | 74 |
| Novel View Synthesis | CO3D | PSNR 22.95 | 24 |
| Few-view 3D Reconstruction | RealEstate10K (test) | PSNR 31.82 | 20 |

(10 of 20 rows shown)
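All results above are reported in PSNR (peak signal-to-noise ratio), the standard image-fidelity metric for novel view synthesis: it is the log-scaled ratio of the maximum possible pixel value to the mean squared error between the rendered and ground-truth images (higher is better). A minimal reference implementation, assuming images given as flat lists of floats in [0, 1]:

```python
import math

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    given as flat lists of pixel values in [0, max_val]."""
    mse = sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a uniform per-pixel error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB; the ~25 dB scores on LLFF correspond to a noticeably smaller average error.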
