
Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture

About

In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.
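The coarse-to-fine idea can be sketched numerically: a coarse network regresses a low-resolution map from the whole image, and a finer network refines the upsampled coarse output using local image detail. The sketch below is a toy NumPy illustration of that data flow only; `coarse_net`, `fine_net`, and `upsample` are hypothetical stand-ins, not the paper's actual convolutional stacks.

```python
import numpy as np

def upsample(pred, factor):
    """Nearest-neighbor upsampling of a 2-D map (toy helper)."""
    return np.repeat(np.repeat(pred, factor, axis=0), factor, axis=1)

def multiscale_predict(image, coarse_net, fine_net, factor=2):
    """Coarse-to-fine prediction: regress a low-resolution map from the
    whole image, then refine its upsampled version at the finer scale."""
    coarse = coarse_net(image)                       # global, low-resolution
    return fine_net(image, upsample(coarse, factor)) # local refinement

# Toy demo: "coarse" = 2x2 block averages, "fine" = blend with image detail.
img = np.arange(16.0).reshape(4, 4)
coarse_net = lambda im: im.reshape(2, 2, 2, 2).mean(axis=(1, 3))
fine_net = lambda im, up: up + 0.5 * (im - up)
out = multiscale_predict(img, coarse_net, fine_net, factor=2)
```

In the paper itself each scale is a convolutional stack and the coarse prediction is fed into the finer scale as an extra feature channel; the sketch only mirrors that coarse-then-refine ordering.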

David Eigen, Rob Fergus · 2014

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | PASCAL VOC 2012 (test) | mIoU | 62.6 | 1342 |
| Monocular depth estimation | KITTI (Eigen) | Abs Rel | 0.203 | 502 |
| Depth estimation | NYU v2 (test) | Threshold accuracy (δ < 1.25) | 76.9 | 423 |
| Depth estimation | KITTI (Eigen split) | RMSE | 6.307 | 276 |
| Semantic segmentation | NYU v2 (test) | mIoU | 34.1 | 248 |
| Surface normal estimation | NYU v2 (test) | Mean angle distance (MAD) | 20.9 | 206 |
| Depth estimation | NYU Depth V2 | RMSE | 0.641 | 177 |
| Semantic segmentation | NYU Depth V2 (test) | mIoU | 34.1 | 172 |
| Monocular depth estimation | KITTI Raw Eigen (test) | RMSE | 7.156 | 159 |
| Depth prediction | NYU Depth V2 (test) | Accuracy (δ < 1.25) | 76.9 | 113 |

Showing 10 of 34 benchmark rows.
