
Efficient Depth-Guided Urban View Synthesis

About

Recent advances in implicit scene representation enable high-fidelity street view novel view synthesis. However, existing methods optimize a neural radiance field for each scene, relying heavily on dense training images and extensive computational resources. To mitigate this shortcoming, we introduce a new method called Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning. Unlike prior generalizable methods that infer geometry via feature matching, EDUS leverages noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images. These geometric priors allow us to apply our generalizable model directly in 3D space, gaining robustness across various sparsity levels. Through comprehensive experiments on the KITTI-360 and Waymo datasets, we demonstrate promising generalization to novel street scenes. Moreover, our results indicate that EDUS achieves state-of-the-art performance in sparse-view settings when combined with fast test-time optimization.
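The core idea of applying a generalizable model directly in 3D space rests on lifting predicted depth maps into world-space point clouds. A minimal sketch of that unprojection step is below (NumPy; the function name and the simple pinhole camera model are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def unproject_depth(depth, K, cam_to_world):
    """Lift a per-pixel depth map into world-space 3D points.

    depth:        (H, W) predicted depth along the camera z-axis
    K:            (3, 3) pinhole intrinsics
    cam_to_world: (4, 4) camera-to-world transform
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous pixel coordinates, one row per pixel
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T          # camera-space directions with z = 1
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((h * w, 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]   # world-space (N, 3) points
```

Aggregating such point clouds from sparse input views yields the noisy geometric prior in 3D space; a generalizable model can then operate on this volume rather than matching 2D features across views.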

Sheng Miao, Jiaxin Huang, Dongfeng Bai, Weichao Qiu, Bingbing Liu, Andreas Geiger, Yiyi Liao • 2024

Related benchmarks

Task                  Dataset                                   Result (PSNR, dB)  Rank
Novel View Synthesis  KITTI-360 (val)                           22.13              10
Novel View Synthesis  KITTI-360 static (val)                    22.13              9
Novel View Synthesis  Waymo Open Dataset Out-of-Domain (val)    23.18              9
Novel View Synthesis  Waymo (val)                               23.41              9
Novel View Synthesis  Waymo Open Dataset zero-shot              23.18              6
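All benchmark results above are reported as PSNR in dB. For reference, a minimal sketch of how PSNR is computed between a rendered and a ground-truth image (NumPy; assumes pixel values normalized to [0, 1]):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((pred - gt) ** 2)  # mean squared error over all pixels
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 per pixel gives an MSE of 0.01 and hence a PSNR of 20 dB; the ~22-23 dB scores above correspond to proportionally smaller average errors.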
