Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture
About
In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multi-scale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image directly to the output map. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.
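The coarse-to-fine idea above can be illustrated with a minimal NumPy sketch: a coarse prediction is made from a downsampled view of the image, then upsampled and combined with fine-scale features from the full-resolution input. This is only a toy illustration with random kernels, not the paper's actual network (which uses learned multi-layer convnets at each scale); all sizes and the 0.1 mixing weight are arbitrary assumptions for the sketch.

```python
import numpy as np

def conv2d(x, w):
    """'Valid' 2-D cross-correlation of a single-channel map x with kernel w."""
    kh, kw = w.shape
    h, wd = x.shape
    out = np.zeros((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def upsample(x, factor):
    """Nearest-neighbour upsampling of a coarse prediction."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))  # toy single-channel input

# Scale 1: predict a coarse map from a downsampled view of the image.
coarse_input = image[::4, ::4]  # 4x4 low-resolution view
coarse = conv2d(np.pad(coarse_input, 1), rng.standard_normal((3, 3)))  # 4x4

# Scale 2: upsample the coarse prediction and refine it with
# fine-scale features computed from the full-resolution image.
fine_features = conv2d(np.pad(image, 1), rng.standard_normal((3, 3)))  # 16x16
refined = upsample(coarse, 4) + 0.1 * fine_features  # full-resolution output map

print(coarse.shape, refined.shape)  # (4, 4) (16, 16)
```

The key structural point is that each finer scale receives the previous scale's prediction as an extra input, so detail is added progressively rather than predicted in one shot.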
David Eigen, Rob Fergus · 2014
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | PASCAL VOC 2012 (test) | mIoU | 62.6 | 1342 |
| Monocular depth estimation | KITTI (Eigen) | Abs Rel | 0.203 | 502 |
| Depth estimation | NYU v2 (test) | Threshold accuracy (δ < 1.25) | 76.9 | 423 |
| Depth estimation | KITTI (Eigen split) | RMSE | 6.307 | 276 |
| Semantic segmentation | NYU v2 (test) | mIoU | 34.1 | 248 |
| Surface normal estimation | NYU v2 (test) | Mean angle distance (MAD) | 20.9 | 206 |
| Depth estimation | NYU Depth V2 | RMSE | 0.641 | 177 |
| Semantic segmentation | NYU Depth V2 (test) | mIoU | 34.1 | 172 |
| Monocular depth estimation | KITTI Raw Eigen (test) | RMSE | 7.156 | 159 |
| Depth prediction | NYU Depth V2 (test) | Accuracy (δ < 1.25) | 76.9 | 113 |
Showing 10 of 34 rows
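The depth metrics in the table follow standard definitions: threshold accuracy δ < 1.25 is the fraction of pixels whose predicted/true depth ratio (taken in whichever direction exceeds 1) is below 1.25, Abs Rel is the mean absolute relative error, and RMSE is the root-mean-square error. A short sketch of these formulas, omitting dataset-specific details such as depth caps and evaluation crops:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics over flattened prediction/ground-truth arrays."""
    ratio = np.maximum(pred / gt, gt / pred)       # symmetric depth ratio
    delta1 = np.mean(ratio < 1.25)                 # threshold accuracy, δ < 1.25
    abs_rel = np.mean(np.abs(pred - gt) / gt)      # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))      # root-mean-square error
    return delta1, abs_rel, rmse

# Toy example: the third pixel is off by a factor of 2.
pred = np.array([1.0, 2.0, 4.0])
gt = np.array([1.0, 2.0, 2.0])
d1, ar, rm = depth_metrics(pred, gt)
print(d1, ar, rm)  # 2/3 of pixels within δ < 1.25
```

Note that δ accuracies are often reported as percentages (as in the table), while Abs Rel and RMSE are raw values in the depth units of the dataset.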