Broadening View Synthesis of Dynamic Scenes from Constrained Monocular Videos

About

In dynamic Neural Radiance Fields (NeRF) systems, state-of-the-art novel view synthesis methods often fail under significant viewpoint deviations, producing unstable and unrealistic renderings. To address this, we introduce Expanded Dynamic NeRF (ExpanDyNeRF), a monocular NeRF framework that leverages Gaussian splatting priors and a pseudo-ground-truth generation strategy to enable realistic synthesis under large-angle rotations. ExpanDyNeRF optimizes density and color features to improve scene reconstruction from challenging perspectives. We also present the Synthetic Dynamic Multiview (SynDM) dataset, the first synthetic multiview dataset for dynamic scenes with explicit side-view supervision, created using a custom GTA V-based rendering pipeline. Quantitative and qualitative results on SynDM and real-world datasets demonstrate that ExpanDyNeRF significantly outperforms existing dynamic NeRF methods in rendering fidelity under extreme viewpoint shifts. Further details are provided in the supplementary materials.

Le Jiang, Shaotong Zhu, Yedi Luo, Shayda Moezzi, Sarah Ostadabbas • 2025

Related benchmarks

Task                                Dataset                               Result             Rank
Dynamic Novel View Synthesis        DyNeRF Coffee scene (test)            FID 132.4          5
Dynamic Novel View Synthesis        DyNeRF Beef scene (test)              FID 135.8          5
Dynamic Scene Novel View Synthesis  SynDM                                 FID (Human) 85.61  5
Dynamic Scene Reconstruction        NVIDIA Dynamic Scenes Skate (test)    FID 90.83          5
Dynamic Scene Reconstruction        NVIDIA Dynamic Scenes Truck (test)    FID 69.37          5
