
Distilling Monocular Foundation Model for Fine-grained Depth Completion

About

Depth completion involves predicting dense depth maps from sparse LiDAR inputs. However, sparse depth annotations from sensors limit the availability of dense supervision, which is necessary for learning detailed geometric features. In this paper, we propose a two-stage knowledge distillation framework that leverages powerful monocular foundation models to provide dense supervision for depth completion. In the first stage, we introduce a pre-training strategy that generates diverse training data from natural images, distilling geometric knowledge into depth completion models. Specifically, we simulate LiDAR scans using monocular depth and mesh reconstruction, thereby creating training data without requiring ground-truth depth. However, monocular depth estimation suffers from inherent scale ambiguity in real-world settings. To address this, in the second stage, we employ a scale- and shift-invariant loss (SSI Loss) to learn real-world scales when fine-tuning on real-world datasets. Our two-stage distillation framework enables depth completion models to harness the strengths of monocular foundation models. Experimental results demonstrate that models trained with our two-stage distillation framework achieve state-of-the-art performance, ranking first place on the KITTI benchmark. Code is available at https://github.com/Sharpiless/DMD3C
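The scale- and shift-invariant loss mentioned above can be illustrated with a minimal sketch. The paper does not publish this exact implementation; the version below follows the standard SSI formulation (as popularized in MiDaS-style monocular depth training): align the prediction to the target with a closed-form least-squares scale and shift over valid pixels, then penalize the remaining residual. Function and variable names here are illustrative, not taken from the authors' code.

```python
import numpy as np

def ssi_loss(pred, target, mask):
    """Scale- and shift-invariant loss (illustrative sketch).

    Solves min_{s, b} ||s * pred + b - target||^2 over valid pixels,
    then returns the mean absolute residual after alignment. Because
    the alignment absorbs any global scale and shift, the loss only
    penalizes relative (structural) depth errors.
    """
    d = pred[mask]
    t = target[mask]
    # Normal-equation solve for the affine alignment s, b.
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    aligned = s * pred + b
    return np.mean(np.abs(aligned[mask] - t))
```

By construction, any prediction that is an affine transform of the target incurs (near-)zero loss, which is what lets a model trained this way inherit a foundation model's relative geometry while real-world scale is recovered from the sparse LiDAR points during fine-tuning.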

Yingping Liang, Yutao Hu, Wenqi Shao, Ying Fu • 2025

Related benchmarks

Task             | Dataset                       | Result       | Rank
Depth Completion | NYU-depth-v2 official (test)  | RMSE 0.085   | 187
Depth Completion | KITTI (test)                  | RMSE 678.1   | 67
Depth Completion | KITTI depth completion (test) | RMSE 0.6781  | 27
Depth Completion | NYU V2                        | RMSE 0.085   | 19
Depth Estimation | DDAD zero-shot                | RMSE 7.766   | 11
Depth Completion | ScanNet zero-shot             | RMSE 0.101   | 4
Depth Completion | VOID1500 zero-shot            | RMSE 0.676   | 4
