
Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction

About

Leveraging the visual priors of pre-trained text-to-image diffusion models offers a promising solution to enhance zero-shot generalization in dense prediction tasks. However, existing methods often uncritically use the original diffusion formulation, which may not be optimal due to the fundamental differences between dense prediction and image generation. In this paper, we provide a systematic analysis of the diffusion formulation for dense prediction, focusing on both quality and efficiency. We find that the original parameterization type for image generation, which learns to predict noise, is harmful for dense prediction, and that the multi-step noising/denoising diffusion process is both unnecessary and challenging to optimize. Based on these insights, we introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We also reformulate the diffusion process into a single-step procedure, simplifying optimization and significantly boosting inference speed. Additionally, we introduce a novel tuning strategy called the detail preserver, which achieves more accurate and fine-grained predictions. Without scaling up the training data or model capacity, Lotus achieves SoTA performance in zero-shot depth and normal estimation across various datasets. It also enhances efficiency, being significantly faster than most existing diffusion-based methods. Lotus' superior quality and efficiency also enable a wide range of practical applications, such as joint estimation and single/multi-view 3D reconstruction. Project page: https://lotus3d.github.io/.
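The two design choices the abstract describes can be illustrated with a minimal sketch (not the official Lotus code): under noise prediction, recovering the annotation from the predicted noise divides by a small signal coefficient at high noise levels, amplifying model error; direct annotation prediction avoids that. Here `toy_model` is a hypothetical stand-in for the fine-tuned denoiser network.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(z, t):
    # Hypothetical placeholder for the denoiser network; in Lotus this
    # would be a fine-tuned Stable Diffusion U-Net.
    return 0.9 * z + 0.001 * t

def eps_prediction(x0, t, alpha_bar):
    """Standard generative target: the model predicts the added noise."""
    eps = rng.standard_normal(x0.shape)
    z_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps
    eps_hat = toy_model(z_t, t)
    # Recovering x0 from eps_hat divides by sqrt(alpha_bar), which is tiny
    # at large t -- the error-amplification the paper calls harmful variance.
    return (z_t - np.sqrt(1 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)

def x0_prediction(x0, t, alpha_bar):
    """Lotus-style target: the model outputs the annotation directly."""
    eps = rng.standard_normal(x0.shape)
    z_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps
    return toy_model(z_t, t)  # no rescaling, so no variance blow-up

depth = rng.standard_normal((4, 4))        # toy "annotation" map
# Single-step inference: one fixed timestep, no iterative denoising loop.
pred = x0_prediction(depth, t=999, alpha_bar=0.0064)
print(pred.shape)
```

This is only a sketch under stated assumptions; the actual training objective, timestep choice, and detail-preserver mechanism are specified in the paper.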

Jing He, Haodong Li, Wei Yin, Yixun Liang, Leheng Li, Kaiqiang Zhou, Hongbo Zhang, Bingbing Liu, Ying-Cong Chen • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Monocular Depth Estimation | KITTI (Eigen) | Abs Rel | 9.3 | 523
Monocular Depth Estimation | NYU v2 (test) | Abs Rel | 0.051 | 300
Monocular Depth Estimation | KITTI | Abs Rel | 0.081 | 203
Monocular Depth Estimation | ETH3D | Abs Rel | 5.9 | 132
Monocular Depth Estimation | NYU V2 | Delta 1 Acc | 97 | 131
Surface Normal Prediction | NYU V2 | Mean Error | 16.2 | 118
Monocular Depth Estimation | DIODE | Abs Rel | 9.8 | 113
Depth Estimation | ScanNet | Abs Rel | 0.06 | 108
Depth Estimation | KITTI | Abs Rel | 0.113 | 106
Monocular Depth Estimation | KITTI Improved GT (Eigen) | Abs Rel | 0.081 | 92

Showing 10 of 52 rows.
