Guiding Monocular Depth Estimation Using Depth-Attention Volume

About

Recovering the scene depth from a single image is an ill-posed problem that requires additional priors, often referred to as monocular depth cues, to disambiguate between different 3D interpretations. In recent works, those priors have been learned in an end-to-end manner from large datasets using deep neural networks. In this paper, we propose guiding depth estimation to favor planar structures that are ubiquitous, especially in indoor environments. This is achieved by incorporating a non-local coplanarity constraint into the network with a novel attention mechanism called depth-attention volume (DAV). Experiments on two popular indoor datasets, namely NYU-Depth-v2 and ScanNet, show that our method achieves state-of-the-art depth estimation results while using only a fraction of the number of parameters needed by the competing methods.
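A depth-attention volume stores, for every pair of pixels, how strongly one pixel's depth should inform another's, which lets the network propagate depth along planar structures. The sketch below illustrates only that non-local aggregation step, assuming a pairwise weight matrix is already available; `apply_depth_attention` and its shapes are hypothetical and are not the authors' implementation.

```python
import numpy as np

def apply_depth_attention(depth: np.ndarray, dav: np.ndarray) -> np.ndarray:
    """Aggregate per-pixel depth non-locally via pairwise attention.

    depth: (P,) coarse depth for P = H*W pixels.
    dav:   (P, P) non-negative attention weights; dav[i, j] says how much
           pixel j's depth contributes to pixel i's refined estimate.
    """
    weights = dav / dav.sum(axis=1, keepdims=True)  # row-normalize weights
    return weights @ depth  # each output is a weighted mean over all pixels

# Identity attention leaves depths untouched; uniform attention averages them.
coarse = np.array([1.0, 2.0, 3.0])
refined = apply_depth_attention(coarse, np.ones((3, 3)))
```

In the actual network the weights are predicted from image features and supervised with coplanarity cues, so coplanar pixels attend strongly to one another rather than averaging uniformly as above.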

Lam Huynh, Phong Nguyen-Ha, Jiri Matas, Esa Rahtu, Janne Heikkilä · 2020

Related benchmarks

Task                         Dataset                      Metric                              Result   Rank
Depth Estimation             NYU v2 (test)                Threshold Accuracy (delta < 1.25)   88.2     423
Monocular Depth Estimation   NYU v2 (test)                Abs Rel                             0.108    257
Monocular Depth Estimation   NYU-Depth v2 (official)      Abs Rel                             0.108    75
Depth Estimation             NYU v2 (val)                 RMSE                                0.412    53
Monocular Depth Estimation   NYU Depth Eigen v2 (test)    Abs Rel                             0.108    49
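The metrics in the table are the standard monocular depth evaluation measures. For reference, they can be computed as below; this is a plain sketch of the common definitions, not code from any particular benchmark's evaluation suite:

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Standard monocular depth metrics over valid (gt > 0) pixels."""
    pred, gt = pred[gt > 0], gt[gt > 0]
    abs_rel = np.mean(np.abs(pred - gt) / gt)      # Abs Rel: mean relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))      # RMSE: root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)                 # threshold accuracy (delta < 1.25)
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}

m = depth_metrics(np.array([1.0, 2.0, 4.0]), np.array([1.0, 2.0, 5.0]))
```

Threshold accuracy is the fraction of pixels whose predicted depth is within a multiplicative factor of 1.25 of the ground truth, so higher is better, while Abs Rel and RMSE are errors where lower is better.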
