
Exploiting Diffusion Prior for Generalizable Dense Prediction

About

Contents generated by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate due to the immitigable domain gap. We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks. To address the misalignment between deterministic prediction tasks and stochastic T2I models, we reformulate the diffusion process through a sequence of interpolations, establishing a deterministic mapping between input RGB images and output prediction distributions. To preserve generalizability, we use low-rank adaptation to fine-tune pre-trained models. Extensive experiments across five tasks, including 3D property estimation, semantic segmentation, and intrinsic image decomposition, showcase the efficacy of the proposed method. Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
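The core idea in the abstract — replacing the stochastic noising process with a deterministic sequence of interpolations between the input RGB image and the target prediction — can be sketched as follows. This is an illustrative sketch only: the function names, the cosine schedule, and the latent shapes are assumptions, not the paper's actual implementation; the paper operates in the latent space of a pre-trained T2I model and fine-tunes it with low-rank adaptation.

```python
import numpy as np

def make_schedule(T=1000):
    # Hypothetical cosine-style schedule: alpha_bar runs from 1 (t=0)
    # down to ~0 (t=T), mirroring standard diffusion schedules.
    t = np.linspace(0.0, 1.0, T + 1)
    return np.cos(t * np.pi / 2.0) ** 2

def interpolate(rgb_latent, pred_latent, alpha_bar_t):
    # DMP-style deterministic "forward" step: instead of mixing the
    # prediction latent with Gaussian noise, mix it with the RGB latent.
    # At t=0 the state is the prediction; at t=T it is the input image,
    # so the reverse process is a deterministic RGB -> prediction mapping.
    return (np.sqrt(alpha_bar_t) * pred_latent
            + np.sqrt(1.0 - alpha_bar_t) * rgb_latent)
```

At inference, a fine-tuned denoiser would iteratively step this interpolation from the RGB endpoint back to the prediction endpoint; because no noise is injected, the same input always yields the same estimate.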

Hsin-Ying Lee, Hung-Yu Tseng, Hsin-Ying Lee, Ming-Hsuan Yang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Depth Estimation | NYU Depth V2 | – | – | 177 |
| Depth Estimation | ScanNet | AbsRel | 0.146 | 94 |
| Depth Estimation | KITTI | AbsRel | 0.24 | 92 |
| Depth Estimation | DIODE | Delta-1 Accuracy | 70.6 | 62 |
| Depth Estimation | NYU | AbsRel | 0.109 | 20 |
| Depth Estimation | ETH3D | AbsRel | 0.128 | 19 |
| Surface Normal Estimation | Bedroom Images (in-domain) | L1 Error | 0.0514 | 11 |
| Monocular Depth Estimation | Bedroom Images (in-domain) | REL | 10.72 | 8 |
| Monocular Depth Estimation | Generalization Images (out-of-domain) | Relative Error (REL) | 0.2117 | 8 |
| Surface Normal Estimation | Generalization Images (out-of-domain) | L1 Error | 0.0872 | 8 |

Showing 10 of 21 rows.

Other info

Code
