
MiDaS v3.1 -- A Model Zoo for Robust Monocular Relative Depth Estimation

About

We release MiDaS v3.1 for monocular depth estimation, offering a variety of new models based on different encoder backbones. This release is motivated by the success of transformers in computer vision, with a large variety of pretrained vision transformers now available. We explore how using the most promising vision transformers as image encoders impacts depth estimation quality and runtime of the MiDaS architecture. Our investigation also includes recent convolutional approaches that achieve comparable quality to vision transformers in image classification tasks. While the previous release MiDaS v3.0 solely leverages the vanilla vision transformer ViT, MiDaS v3.1 offers additional models based on BEiT, Swin, SwinV2, Next-ViT and LeViT. These models offer different performance-runtime tradeoffs. The best model improves the depth estimation quality by 28% while efficient models enable downstream tasks requiring high frame rates. We also describe the general process for integrating new backbones. A video summarizing the work can be found at https://youtu.be/UjaeNNFf9sE and the code is available at https://github.com/isl-org/MiDaS.
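All MiDaS models share the same high-level inference flow: resize the input image to the network's training resolution, run the encoder-decoder to predict relative inverse depth, and resize the prediction back to the original resolution. The sketch below illustrates that flow with a placeholder network; `DummyDepthNet`, `estimate_depth`, and the input size of 384 are illustrative assumptions, not the actual MiDaS API (the real models are loadable from the repository linked above).

```python
import torch
import torch.nn.functional as F

def estimate_depth(model, image, input_size=384):
    """Sketch of the typical MiDaS-style inference flow.

    image: (3, H, W) float tensor in [0, 1].
    Returns a (H, W) relative inverse depth map.
    """
    _, h, w = image.shape
    # Resize to the resolution the network expects
    x = F.interpolate(image.unsqueeze(0), size=(input_size, input_size),
                      mode="bicubic", align_corners=False)
    with torch.no_grad():
        pred = model(x)  # (1, input_size, input_size) relative inverse depth
    # Resize the prediction back to the original image resolution
    depth = F.interpolate(pred.unsqueeze(1), size=(h, w),
                          mode="bicubic", align_corners=False)
    return depth.squeeze()

class DummyDepthNet(torch.nn.Module):
    """Stand-in for a real MiDaS backbone (hypothetical, for illustration)."""
    def forward(self, x):
        return x.mean(dim=1)  # collapse channels -> (B, H, W)

depth = estimate_depth(DummyDepthNet(), torch.rand(3, 480, 640))
```

Because the prediction is *relative* inverse depth, downstream users typically rescale it per image (e.g., via least-squares alignment to sparse metric measurements) before using it metrically.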

Reiner Birkl, Diana Wofk, Matthias Müller • 2023

Related benchmarks

Task                       | Dataset              | Metric      | Result | Rank
Monocular Depth Estimation | KITTI                | --          | --     | 161
Monocular Depth Estimation | ETH3D                | AbsRel      | 6.1    | 117
Monocular Depth Estimation | NYU V2               | Delta 1 Acc | 98     | 113
Monocular Depth Estimation | ETH-3D (test)        | AbsRel      | 0.139  | 38
Depth Estimation           | DIODE (test)         | AbsRel      | 0.075  | 33
Video Depth Estimation     | VDW (test)           | Delta 1     | 67.2   | 24
Monocular Depth Estimation | KITTI official (val) | --          | --     | 23
Monocular Depth Estimation | DIW                  | WHDR        | 0.103  | 19
Relative Depth Estimation  | Sintel (test)        | AbsRel      | 0.587  | 15
Relative Depth Estimation  | KITTI 18 (test)      | AbsRel      | 0.127  | 11

Showing 10 of 25 rows
