
Depth Anything V2

About

This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with precise annotations and diverse scenes to facilitate future research.
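The three key practices can be illustrated with a toy, runnable sketch of the teacher-student recipe. Everything here is an illustrative stand-in (1-parameter least-squares "models", made-up numbers), not the authors' implementation:

```python
# Toy sketch of the three-stage recipe: (1) train a large teacher on
# synthetic images with exact labels, (2) pseudo-label unlabeled real
# images with the teacher, (3) train students on the pseudo-labels.
# `train` and all data below are illustrative stand-ins.

def train(inputs, targets):
    """Fit depth = k * x by least squares; returns the 'model' as a function."""
    k = sum(x * d for x, d in zip(inputs, targets)) / sum(x * x for x in inputs)
    return lambda x: k * x

# Stage 1: teacher trained on synthetic data with precise labels.
synthetic_x = [1.0, 2.0, 3.0]
synthetic_d = [2.0, 4.0, 6.0]            # exact labels: depth = 2 * x
teacher = train(synthetic_x, synthetic_d)

# Stage 2: teacher pseudo-labels large-scale unlabeled real images,
# bridging the synthetic-to-real domain gap.
real_x = [0.5, 1.5, 2.5, 4.0]
pseudo_d = [teacher(x) for x in real_x]

# Stage 3: (smaller) student models learn from the pseudo-labeled real data.
student = train(real_x, pseudo_d)
print(student(10.0))                     # student inherits the teacher's mapping
```

The point of the bridge in stage 2 is that students never see synthetic labels directly; they learn from teacher predictions on real images.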

Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU | 58.6 | 2888 |
| Robot Manipulation | LIBERO | Goal Achievement | 87.6 | 700 |
| Semantic segmentation | Cityscapes | mIoU | 85.6 | 658 |
| 3D Human Pose Estimation | Human3.6M (test) | -- | -- | 547 |
| Depth Estimation | NYU v2 (test) | Threshold Accuracy (delta < 1.25) | 98.4 | 432 |
| Monocular Depth Estimation | NYU v2 (test) | Abs Rel | 0.058 | 300 |
| Monocular Depth Estimation | KITTI (Eigen split) | Abs Rel | 0.149 | 215 |
| Semantic segmentation | SUN RGB-D (test) | -- | -- | 212 |
| Depth Estimation | NYU Depth V2 | RMSE | 0.206 | 209 |
| Monocular Depth Estimation | KITTI | Abs Rel | 0.09 | 203 |

Showing 10 of 254 rows.
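The depth rows above report three standard monocular depth metrics: absolute relative error (Abs Rel), RMSE, and threshold accuracy (delta < 1.25). A minimal, dependency-free sketch of how they are computed per pixel (the example depth values are made up):

```python
# Standard monocular depth evaluation metrics over per-pixel depths.
import math

def abs_rel(pred, gt):
    """Mean absolute relative error: mean(|pred - gt| / gt). Lower is better."""
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)

def rmse(pred, gt):
    """Root mean squared error in depth units (e.g. meters). Lower is better."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt))

def delta_accuracy(pred, gt, thresh=1.25):
    """Fraction of pixels with max(pred/gt, gt/pred) < thresh. Higher is better."""
    return sum(max(p / g, g / p) < thresh for p, g in zip(pred, gt)) / len(gt)

# Hypothetical per-pixel ground-truth and predicted depths in meters.
gt   = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.5, 4.2]
print(abs_rel(pred, gt), rmse(pred, gt), delta_accuracy(pred, gt))
```

Note that delta < 1.25 is scale-sensitive, which is why affine-invariant (relative) depth models are usually aligned to ground truth before these metrics are reported.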
