
Neural Scene Graphs for Dynamic Scenes

About

Recent implicit neural rendering methods have demonstrated that it is possible to learn accurate view synthesis for complex scenes by predicting their volumetric density and color, supervised solely by a set of RGB images. However, existing methods are restricted to learning efficient representations of static scenes that encode all scene objects into a single neural network, and lack the ability to represent dynamic scenes or to decompose them into individual scene objects. In this work, we present the first neural rendering method that decomposes dynamic scenes into scene graphs. We propose a learned scene graph representation, which encodes object transformations and radiance, to efficiently render novel arrangements and views of the scene. To this end, we learn implicitly encoded scenes, combined with a jointly learned latent representation, to describe objects with a single implicit function. We assess the proposed method on synthetic and real automotive data, validating that our approach learns dynamic scenes -- solely by observing a video of the scene -- and allows for rendering novel photo-realistic views of novel scene compositions with unseen sets of objects at unseen poses.
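The core idea in the abstract can be sketched in a few lines of code: each dynamic object becomes a node in a scene graph that carries a rigid-body pose and a latent appearance code, while a single shared implicit function maps an object-local 3D point plus that latent code to density and color. The sketch below is a minimal, hypothetical illustration of this decomposition; the class and function names are not the paper's API, and a random MLP stands in for the trained radiance network.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, HIDDEN = 8, 16
# A tiny random MLP standing in for the trained, shared implicit function.
W1 = rng.normal(size=(3 + LATENT_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 4))  # outputs: (density, r, g, b)

def implicit_fn(x_local, latent):
    """Shared implicit function F(x, z) -> (sigma, rgb)."""
    h = np.tanh(np.concatenate([x_local, latent]) @ W1)
    out = h @ W2
    sigma = np.logaddexp(0.0, out[0])      # softplus keeps density non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # sigmoid keeps colors in [0, 1]
    return sigma, rgb

class SceneNode:
    """A leaf of the scene graph: object pose plus latent appearance code."""
    def __init__(self, rotation, translation, latent):
        self.R, self.t, self.z = rotation, translation, latent

    def query(self, x_world):
        # Transform the world-space sample into the object's local frame,
        # then evaluate the shared network with this object's latent code.
        x_local = self.R.T @ (x_world - self.t)
        return implicit_fn(x_local, self.z)

# Two objects sharing one network but differing in pose and latent code;
# moving an object only changes its transform, not the learned function.
car_a = SceneNode(np.eye(3), np.array([1.0, 0.0, 0.0]), rng.normal(size=LATENT_DIM))
car_b = SceneNode(np.eye(3), np.array([-2.0, 0.0, 3.0]), rng.normal(size=LATENT_DIM))

sigma, rgb = car_a.query(np.array([1.2, 0.1, 0.0]))
```

Because poses and radiance are factored apart, rendering a novel arrangement amounts to editing the transforms of the graph nodes and re-querying the same shared function along camera rays.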

Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Scene Reconstruction | nuScenes | PSNR 21.67 | 17 |
| Novel View Synthesis | KITTI 75% views (train) | PSNR 23.41 | 14 |
| Novel View Synthesis | KITTI 50% views (train) | PSNR 23.23 | 14 |
| Novel View Synthesis | KITTI 25% views (train) | PSNR 20 | 10 |
| Novel View Synthesis | VKITTI 2 (25% train views) | PSNR 21.29 | 10 |
| Driving Scene Reconstruction | KITTI-360 | PSNR 22.89 | 10 |
| Mono-view synthesis | KITTI-360 | PSNR 22.89 | 8 |
| Novel View Synthesis | VKITTI2 75% (test) | PSNR 23.41 | 7 |
| Novel View Synthesis | VKITTI2 50% (test) | PSNR 23.23 | 7 |
| Novel View Synthesis | VKITTI2 25% (test) | PSNR 21.29 | 7 |

Showing 10 of 22 rows
