
Spherical View Synthesis for Self-Supervised 360 Depth Estimation

About

Learning-based approaches for depth perception are limited by the availability of clean training data. This has led to the utilization of view synthesis as an indirect objective for learning depth estimation using efficient data acquisition procedures. Nonetheless, most research focuses on pinhole-based monocular vision, with scarce works presenting results for omnidirectional input. In this work, we explore spherical view synthesis for learning monocular 360 depth in a self-supervised manner and demonstrate its feasibility. Under a purely geometrically derived formulation, we present results for horizontal and vertical baselines, as well as for the trinocular case. Further, we show how to better exploit the expressiveness of traditional CNNs when applied to the equirectangular domain in an efficient manner. Finally, given the availability of ground truth depth data, our work is uniquely positioned to compare view synthesis against direct supervision in a consistent and fair manner. The results indicate that alternative research directions might be better suited to enable higher quality depth perception. Our data, models and code are publicly available at https://vcl3d.github.io/SphericalViewSynthesis/.
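To make the self-supervision objective concrete, the core geometric step is depth-image-based warping in the equirectangular domain: each pixel is back-projected to 3D using its spherical depth, translated by the stereo baseline, and re-projected into the target view, yielding a sampling grid for photometric comparison. Below is a minimal NumPy sketch of this warping for a vertical baseline; the function name, the baseline value, and the angular conventions are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def spherical_warp(depth, baseline=0.26):
    """Compute the (u, v) sampling grid that maps a view displaced
    vertically by `baseline` metres back to the source equirectangular
    image, given the source spherical (radial) depth map.
    Note: baseline value and conventions are illustrative assumptions."""
    h, w = depth.shape
    # Pixel grid -> longitude phi in [-pi, pi), latitude theta in (-pi/2, pi/2)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    phi = (u + 0.5) / w * 2.0 * np.pi - np.pi
    theta = np.pi / 2.0 - (v + 0.5) / h * np.pi
    # Back-project each pixel to 3D using its spherical depth
    x = depth * np.cos(theta) * np.sin(phi)
    y = depth * np.sin(theta)
    z = depth * np.cos(theta) * np.cos(phi)
    # Translate into the target camera frame (vertical baseline along y)
    y = y - baseline
    # Re-project to spherical coordinates of the target view
    r = np.sqrt(x**2 + y**2 + z**2)
    phi_t = np.arctan2(x, z)
    theta_t = np.arcsin(np.clip(y / np.maximum(r, 1e-8), -1.0, 1.0))
    # Convert angles back to pixel coordinates
    u_t = (phi_t + np.pi) / (2.0 * np.pi) * w - 0.5
    v_t = (np.pi / 2.0 - theta_t) / np.pi * h - 0.5
    return u_t, v_t
```

Bilinearly sampling the source image at this grid synthesizes the target view, and the photometric discrepancy between the synthesized and the captured view serves as the training loss, so no ground-truth depth is needed.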

Nikolaos Zioulis, Antonis Karakottas, Dimitrios Zarpalas, Federico Alvarez, Petros Daras • 2019

Related benchmarks

Task                        | Dataset            | Metric        | Result | Rank
Depth Estimation            | Matterport3D       | delta1        | 89.84  | 35
Depth Estimation            | Structure3D (test) | AbsRel        | 0.1142 | 18
Monocular Depth Estimation  | PanoSunCG          | RMSE          | 0.6965 | 11
Depth Estimation            | 3D60 Stanford3D    | AbsRel        | 0.1003 | 6
Depth Estimation            | 3D60 Matterport3D  | AbsRel        | 0.1063 | 6
Monocular Depth Estimation  | Stanford3D         | Abs-Rel Error | 0.1003 | 6
Depth Estimation            | 3D60 SunCG         | AbsRel        | 0.1867 | 6
Monocular Depth Estimation  | Stanford (test)    | AbsRel        | 0.1844 | 5
