
Robust Dynamic Radiance Fields

About

Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene. Existing methods, however, assume that accurate camera poses can be reliably estimated by Structure from Motion (SfM) algorithms. These methods are thus unreliable, as SfM algorithms often fail or produce erroneous poses on challenging videos with highly dynamic objects, poorly textured surfaces, and rotating camera motion. We address this robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters (poses and focal length). We demonstrate the robustness of our approach via extensive quantitative and qualitative experiments. Our results show favorable performance over the state-of-the-art dynamic view synthesis methods.
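The key idea above is to optimize scene content and camera parameters jointly rather than trusting poses from a separate SfM stage. The following is a deliberately tiny sketch of that pattern, not the paper's implementation: a 1D toy where one "scene" parameter `a` and one "camera" parameter `t` are fit together by gradient descent on a shared photometric-style loss. All names and numbers are illustrative assumptions.

```python
# Toy sketch (NOT the paper's method): jointly optimize a scene parameter
# and a camera parameter against observations, analogous in spirit to how
# RoDynRF optimizes radiance fields and camera poses together.

def joint_fit(xs, ys, a=0.5, t=0.0, lr=0.01, steps=2000):
    """Fit y ~ a * (x + t): 'a' plays the scene, 't' the camera pose."""
    for _ in range(steps):
        residuals = [a * (x + t) - y for x, y in zip(xs, ys)]
        # Analytic gradients of the squared-error loss w.r.t. both unknowns.
        grad_a = sum(2 * r * (x + t) for r, x in zip(residuals, xs))
        grad_t = sum(2 * r * a for r in residuals)
        a -= lr * grad_a
        t -= lr * grad_t
    loss = sum((a * (x + t) - y) ** 2 for x, y in zip(xs, ys))
    return a, t, loss

# Hypothetical ground truth: scene slope 2.0, camera offset 0.3.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [2.0 * (x + 0.3) for x in xs]
a, t, loss = joint_fit(xs, ys)
```

Even in this toy, both unknowns influence the same residual, so errors in one can initially be absorbed by the other; the real method faces the same coupling at far larger scale, which is why it adds coarse-to-fine training and auxiliary supervision.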

Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang · 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Novel View Synthesis | iPhone DyCheck 7 scenes 2x resolution | mPSNR 17.1 | 31 |
| 4D Reconstruction | DyCheck (test) | mPSNR 17.09 | 21 |
| Dynamic Scene Novel View Synthesis | NVIDIA video dataset (average over all scenes) | PSNR 25.89 | 17 |
| Novel View Synthesis | Nvidia Dataset | PSNR 25.89 | 15 |
| Camera pose estimation | MPI Sintel | ATE (m) 0.089 | 11 |
| Novel View Synthesis | Dynamic Scene | PSNR (Jumping) 25.66 | 9 |
| Novel View Synthesis | Stereo Blur Dataset (test) | PSNR 20.88 | 9 |
| Video Reconstruction | Tap-Vid DAVIS | PSNR 24.79 | 7 |
| Novel View Synthesis | NVIDIA Dynamic Scene Dataset (original) | PSNR 25.89 | 6 |
| Novel View Synthesis | iPhone dataset v1 (test) | mPSNR (Apple) 18.73 | 5 |

Showing 10 of 16 rows
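Most entries above report PSNR (peak signal-to-noise ratio) in dB, the standard image-fidelity metric for view synthesis; the mPSNR variants are masked versions that score only covisible regions. As a reference, here is a minimal stdlib sketch of plain PSNR for images with values in [0, 1]; the flattened pixel lists are illustrative.

```python
import math

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)

# A uniform error of 0.05 on every pixel gives MSE = 0.0025, i.e. ~26.02 dB.
pred = [0.55, 0.25, 0.75]
target = [0.50, 0.20, 0.70]
```

Higher is better: each +10 dB corresponds to a 10x reduction in mean squared error, so the ~5 dB gap between the NVIDIA-dataset and iPhone entries reflects a roughly 3x MSE difference.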

Other info

Code
