
Equivariant Neural Rendering

About

We propose a framework for learning neural scene representations directly from images, without 3D supervision. Our key insight is that 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. Specifically, we introduce a loss which enforces equivariance of the scene representation with respect to 3D transformations. Our formulation allows us to infer and render scenes in real time while achieving comparable results to models requiring minutes for inference. In addition, we introduce two challenging new datasets for scene representation and neural rendering, including scenes with complex lighting and backgrounds. Through experiments, we show that our model achieves compelling results on these datasets as well as on standard ShapeNet benchmarks.
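The equivariance loss described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the "scene" is a 2D grid and the 3D camera transformation is simplified to a 90-degree rotation, with identity stand-ins (`encode`, `render`) for the learned networks. The idea is the same: applying the transformation to the scene representation, then rendering, should reproduce the observed second view.

```python
# Hedged sketch of the equivariance principle: render(T(encode(a))) ≈ b.
# encode/render are hypothetical identity placeholders for learned networks.

def encode(image):
    """Hypothetical encoder: image -> scene representation (identity here)."""
    return [row[:] for row in image]

def rotate90(scene):
    """Apply the transformation directly to the scene representation
    (90-degree clockwise rotation, standing in for a 3D transform)."""
    return [list(row) for row in zip(*scene[::-1])]

def render(scene):
    """Hypothetical renderer: scene representation -> image (identity here)."""
    return [row[:] for row in scene]

def equivariance_loss(view_a, view_b):
    """Mean squared error between the rendered, transformed representation
    of view_a and the observed view_b."""
    pred = render(rotate90(encode(view_a)))
    return sum((p - t) ** 2
               for pred_row, true_row in zip(pred, view_b)
               for p, t in zip(pred_row, true_row)) / (len(pred) * len(pred[0]))

view_a = [[1, 2], [3, 4]]
view_b = [[3, 1], [4, 2]]  # view_a rotated 90 degrees clockwise
print(equivariance_loss(view_a, view_b))  # 0.0 when the views are consistent
```

Training drives this loss to zero over pairs of views, which is what forces the learned representation to "transform like a real 3D scene" without any explicit 3D supervision.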

Emilien Dupont, Miguel Angel Bautista, Alex Colburn, Aditya Sankar, Carlos Guestrin, Josh Susskind, Qi Shan • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Novel View Synthesis | ShapeNet cars category | PSNR 22.26 | 20 |
| Novel View Synthesis | ShapeNet chairs | SSIM 0.91 | 9 |
| 3D Reconstruction | ShapeNet-SRN chairs (test) | PSNR 22.83 | 8 |
| Image Reconstruction | RMNIST (Rotated MNIST) | PSNR 21.77 | 6 |
| Image Reconstruction | RBMN (Rotated and Blocked MNIST) | PSNR 17.3 | 6 |
| Image Reconstruction | Adrenals | PSNR 21.77 | 6 |
