
Depth Anything 3: Recovering the Visual Space from Any Views

About

We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses. In pursuit of minimal modeling, DA3 yields two key insights: a single plain transformer (e.g., a vanilla DINO encoder) is sufficient as a backbone without architectural specialization, and a single depth-ray prediction target obviates the need for complex multi-task learning. Through our teacher-student training paradigm, the model achieves a level of detail and generalization on par with Depth Anything 2 (DA2). We establish a new visual geometry benchmark covering camera pose estimation, any-view geometry, and visual rendering. On this benchmark, DA3 sets a new state of the art across all tasks, surpassing the prior SOTA, VGGT, by an average of 44.3% in camera pose accuracy and 25.1% in geometric accuracy. Moreover, it outperforms DA2 in monocular depth estimation. All models are trained exclusively on public academic datasets.
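One plausible reading of the "depth-ray" target above is that the network predicts, per pixel, a viewing ray (an origin plus a direction) together with a depth along that ray, so that a 3D point is recovered as origin + depth × direction. The sketch below illustrates that unprojection step only; the function and argument names are illustrative assumptions, not DA3's actual API.

```python
import numpy as np

def unproject(origins, directions, depths):
    """Lift per-pixel (ray, depth) predictions to 3D points.

    origins, directions: (H, W, 3) arrays; depths: (H, W) array.
    Directions are normalized so depth is measured along the ray.
    Returns an (H, W, 3) array of points: origin + depth * direction.
    """
    directions = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
    return origins + depths[..., None] * directions
```

Under this reading, a single prediction head expressed in camera-ray form carries both the scene geometry and the camera model, which is consistent with the abstract's claim that one target replaces multi-task learning.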

Haotong Lin, Sili Chen, Junhao Liew, Donny Y. Chen, Zhenyu Li, Guang Shi, Jiashi Feng, Bingyi Kang • 2025

Related benchmarks

Task                        Dataset                     Result          Rank
Novel View Synthesis        RE10K                       SSIM 71.5       142
Monocular Depth Estimation  ETH3D                       AbsRel 11       132
Monocular Depth Estimation  DIODE                       AbsRel 24.2     113
3D Reconstruction           7 Scenes                    --              94
Monocular Depth Estimation  Sintel                      AbsRel 0.1575   91
Novel View Synthesis        Re10K (test)                PSNR 22.582     79
Novel View Synthesis        ScanNet++                   PSNR 17.973     67
Video Depth Estimation      Sintel (short sequences)    AbsRel 0.278    42
Video Depth Estimation      Bonn (short sequences)      AbsRel 0.052    42
Video Depth Estimation      KITTI (short sequences)     AbsRel 0.045    42

Showing 10 of 73 rows.
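The table reports standard metrics: AbsRel (mean absolute relative depth error, lower is better) and PSNR (peak signal-to-noise ratio in dB, higher is better). As a reference for how these are conventionally computed (this is an illustrative sketch, not the paper's evaluation code):

```python
import numpy as np

def abs_rel(pred, gt):
    """Absolute relative error: mean(|pred - gt| / gt). Lower is better."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mask = gt > 0  # skip invalid (non-positive) ground-truth depths
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio: 10 * log10(max_val^2 / MSE), in dB."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mse = float(np.mean((pred - gt) ** 2))
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Note that published leaderboards may differ in details such as depth alignment (scale or scale-and-shift) and the image value range assumed for `max_val`, so exact numbers are only comparable within a single protocol.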

Other info

GitHub
