
Depth Anything V2

About

This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with precise annotations and diverse scenes to facilitate future research.

Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao • 2024
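
As a quick illustration of how these released checkpoints are typically used, the sketch below runs relative depth inference through the Hugging Face `transformers` depth-estimation pipeline. It is a minimal example, not code from this page: the checkpoint id and image URL are illustrative assumptions.

```python
# Minimal sketch: relative depth inference with a Depth Anything V2 checkpoint
# via the Hugging Face `transformers` depth-estimation pipeline.
# The checkpoint id and image URL below are illustrative assumptions.
import requests
from PIL import Image
from transformers import pipeline

# Placeholder image URL (assumption); any RGB image works.
url = "https://example.com/sample.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "Small" (~25M-parameter) variant; larger variants trade speed for accuracy.
depth_estimator = pipeline(
    "depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
)

result = depth_estimator(image)
result["depth"].save("depth.png")        # PIL image of the predicted depth map
print(result["predicted_depth"].shape)   # raw per-pixel depth tensor
```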

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU | 58.6 | 2731 |
| Semantic segmentation | Cityscapes | mIoU | 85.6 | 578 |
| 3D Human Pose Estimation | Human3.6M (test) | -- | -- | 547 |
| Robot Manipulation | LIBERO | Goal Achievement | 87.6 | 494 |
| Depth Estimation | NYU v2 (test) | Threshold Accuracy (delta < 1.25) | 98.4 | 423 |
| Monocular Depth Estimation | NYU v2 (test) | Abs Rel | 0.058 | 257 |
| Monocular Depth Estimation | KITTI (Eigen split) | Abs Rel | 0.149 | 193 |
| Depth Estimation | NYU Depth V2 | RMSE | 0.206 | 177 |
| Monocular Depth Estimation | KITTI | Abs Rel | 8 | 161 |
| Monocular Depth Estimation | ETH3D | Abs Rel | 0.37 | 117 |
Showing 10 of 199 rows.
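
For reference, the two depth metrics that appear most often in the table above have standard definitions: absolute relative error (Abs Rel, lower is better) and threshold accuracy (delta < 1.25, higher is better). The sketch below shows how they are commonly computed; the function names, array arguments, and the valid-pixel (gt > 0) mask convention are illustrative assumptions.

```python
# Minimal sketch of the two depth metrics reported in the table above.
# Assumes `pred` and `gt` are NumPy arrays of predicted and ground-truth depth.
import numpy as np


def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean of |pred - gt| / gt over valid (gt > 0) pixels; lower is better."""
    mask = gt > 0
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))


def delta_accuracy(pred: np.ndarray, gt: np.ndarray, threshold: float = 1.25) -> float:
    """Fraction of valid pixels where max(pred/gt, gt/pred) < threshold; higher is better."""
    mask = gt > 0
    ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return float(np.mean(ratio < threshold))
```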
