
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes

About

We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input. To do this, we introduce Neural Scene Flow Fields, a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion. Our representation is optimized through a neural network to fit the observed input views. We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion. We conduct a number of experiments that demonstrate our approach significantly outperforms recent monocular view synthesis methods, and show qualitative results of space-time view synthesis on a variety of real-world videos.

Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang • 2020
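For intuition, the sketch below shows one plausible way to parameterize such a time-variant continuous function: a small MLP that maps a 3D position, viewing direction, and time value to appearance (RGB), geometry (density), and forward/backward 3D scene flow. The class name, layer widths, and output heads here are illustrative assumptions, not the paper's exact architecture, which additionally uses positional encodings and further regularization.

```python
import torch
import torch.nn as nn

class NeuralSceneFlowFieldSketch(nn.Module):
    """Illustrative time-variant scene representation: maps a 3D point,
    a viewing direction, and a time value to appearance (RGB), geometry
    (density), and 3D scene motion (forward/backward scene flow).
    Layer widths and heads are placeholder choices, not the paper's."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        # 3 (position) + 3 (view direction) + 1 (time) = 7 input dims.
        # A full implementation would positionally encode these inputs.
        self.trunk = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # geometry (opacity)
        self.color_head = nn.Linear(hidden, 3)     # appearance (RGB)
        self.flow_head = nn.Linear(hidden, 6)      # motion to t-1 and t+1

    def forward(self, xyz, view_dir, t):
        h = self.trunk(torch.cat([xyz, view_dir, t], dim=-1))
        sigma = torch.relu(self.density_head(h))
        rgb = torch.sigmoid(self.color_head(h))
        flow_fwd, flow_bwd = self.flow_head(h).split(3, dim=-1)
        return rgb, sigma, flow_fwd, flow_bwd
```

As the abstract states, such a function is optimized so that its rendered views fit the observed input frames, with the predicted scene flow relating points across neighboring time steps.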

Related benchmarks

Task                                 Dataset                                          Metric          Value   Rank
Novel View Synthesis                 iPhone DyCheck 7 scenes 2x resolution            mPSNR           15.46   31
Video Representation                 ND Scene (Individual Sequences)                  PSNR            34.74   21
4D Reconstruction                    DyCheck (test)                                   mPSNR           15.46   21
Dynamic Scene Novel View Synthesis   NVIDIA video dataset (average over all scenes)   PSNR            24.33   17
Inference Efficiency                 Synthetic Lego scene (test)                      Storage (MB)    14.17   15
Novel View Synthesis                 Nvidia Dataset                                   PSNR            24.33   15
Novel View Synthesis                 DyCheck (test)                                   mPSNR           16.45   15
Novel View Synthesis                 real dynamic scenes (test)                       PSNR            26.3    13
Novel View Synthesis                 Stereo Blur Dataset (test)                       PSNR            23.79   9
Novel View Synthesis                 Dynamic Scene                                    PSNR (Jumping)  24.65   9
Showing 10 of 29 rows
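
Most of the results above are reported as PSNR (peak signal-to-noise ratio) between rendered and ground-truth frames, in decibels, where higher is better; mPSNR typically denotes a masked variant computed only over valid pixels, as in the DyCheck benchmark. The helper below is a minimal sketch of the standard PSNR computation; the function name and the max_val default are assumptions for illustration.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a rendered frame and the
    ground-truth frame; higher is better. Assumes both images lie in
    the range [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)
```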
