
How to Use Diffusion Priors under Sparse Views?

About

Novel view synthesis under sparse views has long been an important challenge in 3D reconstruction. Existing works mainly introduce external semantic or depth priors to supervise the optimization of 3D representations. The diffusion model, an external prior that can directly provide visual supervision, has nevertheless underperformed in sparse-view 3D reconstruction when used through Score Distillation Sampling (SDS): sparse views carry lower information entropy than text prompts, which causes mode deviation and, in turn, optimization difficulties. To this end, we present a thorough analysis of SDS from the mode-seeking perspective and propose Inline Prior Guided Score Matching (IPSM), which leverages visual inline priors provided by pose relationships between viewpoints to rectify the rendered image distribution and decompose the original optimization objective of SDS, thereby offering effective diffusion-based visual guidance without any fine-tuning or pre-training. Furthermore, we propose the IPSM-Gaussian pipeline, which adopts 3D Gaussian Splatting as the backbone and adds depth and geometry consistency regularization on top of IPSM to further improve the inline priors and the rectified distribution. Experimental results on different public datasets show that our method achieves state-of-the-art reconstruction quality. The code is released at https://github.com/iCVTEAM/IPSM.
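For context on the SDS mechanism the abstract analyzes, here is a minimal NumPy sketch of the standard Score Distillation Sampling gradient (DreamFusion-style), w(t) * (eps_hat - eps), with the U-Net Jacobian skipped as usual. The cosine noise schedule and the `denoise` callable are hypothetical stand-ins for a pretrained diffusion model; this is an illustration of SDS itself, not of IPSM.

```python
import numpy as np

def sds_grad(render, denoise, t, weight, rng):
    """Score Distillation Sampling gradient on a rendered image.

    render:  rendered view from the 3D representation, array of any shape
    denoise: noise predictor eps_hat(x_t, t) -- stand-in for a pretrained
             diffusion model (hypothetical here)
    t:       diffusion timestep in (0, 1)
    weight:  the scalar weighting w(t)
    rng:     numpy random Generator
    """
    eps = rng.standard_normal(render.shape)      # injected Gaussian noise
    alpha = np.cos(0.5 * np.pi * t) ** 2         # toy cosine noise schedule
    x_t = np.sqrt(alpha) * render + np.sqrt(1.0 - alpha) * eps  # noised image
    eps_hat = denoise(x_t, t)                    # model's noise prediction
    # SDS drops the Jacobian of the denoiser w.r.t. x_t:
    return weight * (eps_hat - eps)
```

The gradient is then backpropagated through the renderer to the 3D parameters; the paper's point is that with low-entropy sparse-view conditioning, this mode-seeking update tends to drift, which IPSM addresses by rectifying the rendered distribution with inline pose priors.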

Qisen Wang, Yifan Zhao, Jiawei Ma, Jia Li • 2024

Related benchmarks

Task                  | Dataset                              | Metric | Result | Rank
Novel View Synthesis  | DTU (test)                           | PSNR   | 19.99  | 82
Novel View Synthesis  | LLFF 9-view                          | PSNR   | 25.2   | 75
Novel View Synthesis  | LLFF 6-view                          | PSNR   | 23.98  | 74
Novel View Synthesis  | LLFF Forward-facing (test)           | PSNR   | 20.44  | 20
Novel View Synthesis  | Mip-NeRF360 (novel views)            | PSNR   | 12.85  | 12
Novel View Synthesis  | MipNeRF-360 Extrapolation Scenarios  | SSIM   | 0.267  | 8
