
3D Photography using Context-aware Layered Depth Inpainting

About

We propose a method for converting a single RGB-D input image into a 3D photo - a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image (LDI) with explicit pixel connectivity as the underlying representation, and present a learning-based inpainting model that synthesizes new local color-and-depth content into the occluded regions in a spatially context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show fewer artifacts compared with the state of the art.
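To make the representation concrete, here is a minimal sketch of an LDI pixel with explicit connectivity. All names (`LDIPixel`, `link`) are illustrative assumptions, not the authors' implementation: the point is only that pixels carry color and depth, that several can share one image location on different layers, and that connectivity is stored explicitly so layers separated by a depth discontinuity stay unlinked.

```python
from dataclasses import dataclass, field

# Hypothetical minimal LDI pixel: color + depth at an image location,
# with explicitly stored neighbor links (an assumption for illustration).
@dataclass
class LDIPixel:
    x: int
    y: int
    depth: float
    color: tuple                                   # (r, g, b)
    neighbors: list = field(default_factory=list)  # explicitly linked pixels

def link(a: LDIPixel, b: LDIPixel) -> None:
    """Record an explicit bidirectional connection between two LDI pixels."""
    a.neighbors.append(b)
    b.neighbors.append(a)

# Two pixels at the same (x, y) but different depths: a foreground surface
# and a hallucinated background layer behind an occlusion.
fg = LDIPixel(10, 20, depth=1.0, color=(200, 120, 90))
bg = LDIPixel(10, 20, depth=3.5, color=(80, 80, 80))
side = LDIPixel(11, 20, depth=1.1, color=(198, 118, 92))

link(fg, side)  # connected: smooth depth transition to the adjacent pixel
# fg and bg are deliberately NOT linked: a depth discontinuity separates them
```

Storing connectivity explicitly, rather than inferring it from depth on the fly, is what lets the inpainting model treat each side of a discontinuity as a separate local context.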

Meng-Li Shih, Shih-Yang Su, Johannes Kopf, Jia-Bin Huang • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Novel View Synthesis | ACID (test) | PSNR 14.87 | 18 |
| Novel View Synthesis | RealEstate10K t=5 (test) | LPIPS 0.116 | 16 |
| Scene Extrapolation | ACID (test) | FID 99.79 | 15 |
| 3D Cinemagraphy | Holynski (val) | Human Preference Score 10.5 | 14 |
| Novel View Synthesis | RealEstate10K (RE10K) t=10 (test) | LPIPS 0.266 | 14 |
| Stereo Video Synthesis | RealEstate10K (test) | FVD 155 | 8 |
| Novel View Synthesis | MannequinChallenge t=3 v1 (test) | LPIPS 0.495 | 6 |
| Novel View Synthesis | MannequinChallenge t=5 v1 (test) | LPIPS 0.59 | 6 |
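Of the metrics above, PSNR has a simple closed form: 10 · log10(MAX² / MSE), where MAX is the maximum pixel value. A small self-contained sketch (the function name and list-based interface are illustrative, not from any benchmark's codebase; LPIPS and FID, by contrast, require learned networks and cannot be computed this way):

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences,
    with pixel values assumed to lie in [0, max_val]."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)

# Uniform error of 0.1 per pixel gives MSE = 0.01, so PSNR = 20 dB.
print(psnr([0.5, 0.5, 0.5, 0.5], [0.4, 0.4, 0.4, 0.4]))  # → 20.0
```

Higher PSNR is better (so a PSNR rank rewards larger values), whereas LPIPS and FID are distances where lower is better.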
