
Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

About

Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
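The key insight above — that the depth of the surface along a camera ray is an implicit function of the model parameters, so its gradient can be computed analytically without differentiating through the root-finding procedure — can be illustrated with a toy sketch. Here an analytic sphere of radius theta stands in for the learned implicit network (the names `f`, `surface_depth`, and `depth_grad_wrt_theta` are illustrative, not from the paper's codebase), and the implicit-differentiation formula dd/dtheta = -(df/dtheta) / (w · ∇_p f) is checked against the known closed-form depth:

```python
import numpy as np

# Toy stand-in for a learned implicit shape network: a sphere of radius theta.
# f is zero on the surface, positive outside, negative inside.
def f(p, theta):
    return np.linalg.norm(p) - theta

def surface_depth(o, w, theta, lo=0.0, hi=3.0, iters=60):
    """Find the first depth d with f(o + d*w, theta) = 0 by bisection.
    Assumes f > 0 at depth lo (outside) and f < 0 at depth hi (inside).
    Note: no gradients flow through this loop -- that is the point."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(o + mid * w, theta) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def depth_grad_wrt_theta(o, w, theta):
    """Analytic depth gradient via implicit differentiation:
        dd/dtheta = - (df/dtheta) / (w . grad_p f),
    evaluated at the surface point p = o + d*w."""
    d = surface_depth(o, w, theta)
    p = o + d * w
    grad_p = p / np.linalg.norm(p)  # gradient of f w.r.t. the 3D point
    df_dtheta = -1.0                # gradient of f w.r.t. the parameter theta
    return -df_dtheta / np.dot(w, grad_p)

o = np.array([0.0, 0.0, -3.0])  # camera origin
w = np.array([0.0, 0.0, 1.0])   # unit viewing direction
# Closed-form check: the surface lies at z = -theta, so d = 3 - theta
# and dd/dtheta = -1.
print(surface_depth(o, w, 1.0))          # ~2.0
print(depth_grad_wrt_theta(o, w, 1.0))   # ~-1.0
```

In the actual method the analytic sphere is replaced by a neural occupancy/texture network and the gradient of f with respect to the parameters comes from backpropagation, but only at the single surface point; the ray-marching/root-finding step itself needs no stored computation graph.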

Michael Niemeyer, Lars Mescheder, Michael Oechsle, Andreas Geiger • 2019

Related benchmarks

Task | Dataset | Result | Rank
3D surface reconstruction | DTU (test) | - | 69
Unconditional image synthesis | FFHQ 256x256 (test) | FID: 31.5 | 31
Image Synthesis | FFHQ | FID: 31.5 | 16
Novel View Synthesis | ShapeNet (test) | PSNR: 22.7 | 16
Rendering | FFHQ | Total Rendering Time (ms): 5 | 13
Unconditional image synthesis | AFHQ 256x256 (test) | FID: 16.1 | 12
Image Synthesis | AFHQ (full dataset) | FID: 16.1 | 8
Surface Reconstruction | DTU scan118 mask (test) | Chamfer Loss: 0.71 | 5
