
Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator

About

Recent advances in zero-shot monocular depth estimation (MDE) have significantly improved generalization by unifying depth distributions through normalized depth representations and by leveraging large-scale unlabeled data via pseudo-label distillation. However, existing methods that rely on global depth normalization treat all depth values equally, which can amplify noise in pseudo-labels and reduce distillation effectiveness. In this paper, we present a systematic analysis of depth normalization strategies in the context of pseudo-label distillation. Our study shows that, under recent distillation paradigms (e.g., shared-context distillation), normalization is not always necessary: omitting it can help mitigate the impact of noisy supervision. Furthermore, rather than focusing solely on how depth information is represented, we propose Cross-Context Distillation, which integrates both global and local depth cues to enhance pseudo-label quality. We also introduce an assistant-guided distillation strategy that incorporates complementary depth priors from a diffusion-based teacher model, enhancing supervision diversity and robustness. Extensive experiments on benchmark datasets demonstrate that our approach significantly outperforms state-of-the-art methods, both quantitatively and qualitatively.
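To make the normalization question concrete, the sketch below shows a common affine-invariant (scale-and-shift) depth normalization of the kind the abstract refers to, together with a simple pseudo-label distillation loss that can be computed with or without normalization. This is a minimal illustration, not the paper's actual implementation: the median/MAD normalization and the L1 distillation loss are assumptions based on standard MDE training practice, and the function names are hypothetical.

```python
import numpy as np

def normalize_depth(d, eps=1e-6):
    """Affine-invariant normalization: subtract the median and divide by the
    mean absolute deviation, so predictions are compared up to scale and shift."""
    t = np.median(d)
    s = np.mean(np.abs(d - t))
    return (d - t) / max(s, eps)

def distill_loss(student_depth, pseudo_depth, normalize=True):
    """L1 distillation loss between a student prediction and a teacher
    pseudo-label. With normalize=False the raw depth maps are compared
    directly, illustrating the 'omit normalization' variant the abstract
    discusses for mitigating noisy pseudo-label supervision."""
    if normalize:
        student_depth = normalize_depth(student_depth)
        pseudo_depth = normalize_depth(pseudo_depth)
    return float(np.mean(np.abs(student_depth - pseudo_depth)))
```

Note the trade-off this exposes: with normalization, a pseudo-label that differs from the student only by a global scale and shift contributes zero loss, while without normalization every absolute depth error (including teacher noise) is penalized directly.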

Xiankang He, Dongyan Guo, Hongji Li, Ruibo Li, Ying Cui, Chi Zhang• 2025

Related benchmarks

Task                        Dataset   Metric       Result  Rank
Monocular Depth Estimation  KITTI     AbsRel       0.063   161
Monocular Depth Estimation  ETH3D     AbsRel       5.4     117
Monocular Depth Estimation  NYU V2    Delta-1 Acc  98.5    113
Monocular Depth Estimation  DIODE     AbsRel       14.2    93
Monocular Depth Estimation  ScanNet   AbsRel       4.3     64

Other info

Code
