Exploiting Diffusion Prior for Generalizable Dense Prediction
About
Content generated by recent Text-to-Image (T2I) diffusion models is sometimes too imaginative for existing off-the-shelf dense predictors to handle, owing to an immitigable domain gap. We introduce DMP, a pipeline that uses a pre-trained T2I model as a prior for dense prediction tasks. To address the misalignment between deterministic prediction tasks and stochastic T2I models, we reformulate the diffusion process as a sequence of interpolations, establishing a deterministic mapping between input RGB images and output prediction distributions. To preserve generalizability, we fine-tune the pre-trained model with low-rank adaptation (LoRA). Extensive experiments across five tasks, including 3D property estimation, semantic segmentation, and intrinsic image decomposition, demonstrate the efficacy of the proposed method. Despite limited-domain training data, DMP yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
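The reformulated diffusion process described above can be sketched as follows. This is a minimal illustrative toy, not the authors' code: it assumes the "noisy" state at each step is a deterministic blend of the target prediction latent and the input RGB latent (the RGB latent playing the role that Gaussian noise plays in standard diffusion), under a simple linear schedule. All variable names (`z_rgb`, `z_pred`, `alpha_bars`) are assumptions for illustration; the actual method operates on the pre-trained T2I model's latents and schedule.

```python
import numpy as np

def interp_state(z_pred, z_rgb, alpha_bar_t):
    """Deterministic intermediate state at step t.

    No Gaussian noise is sampled; the RGB latent replaces it, so the
    mapping from input image to output prediction is deterministic.
    """
    return np.sqrt(alpha_bar_t) * z_pred + np.sqrt(1.0 - alpha_bar_t) * z_rgb

# Toy latents standing in for encoded images.
z_rgb = np.ones((4, 4))    # latent of the input RGB image
z_pred = np.zeros((4, 4))  # latent of the target dense prediction

# A simple linear alpha-bar schedule over T steps (an assumption; in
# practice the pre-trained model's own schedule would be reused).
T = 10
alpha_bars = np.linspace(1.0, 0.0, T + 1)

# Endpoints of the interpolation chain:
x_start = interp_state(z_pred, z_rgb, alpha_bars[0])   # t=0: prediction latent
x_end = interp_state(z_pred, z_rgb, alpha_bars[-1])    # t=T: RGB latent
```

Because every intermediate state is a fixed interpolation of the two endpoints, running the chain from the RGB latent yields a single deterministic prediction rather than a random sample, which is the alignment the abstract describes.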
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Depth Estimation | NYU Depth V2 | -- | -- | 177 |
| Depth Estimation | ScanNet | AbsRel | 0.146 | 94 |
| Depth Estimation | KITTI | AbsRel | 0.24 | 92 |
| Depth Estimation | DIODE | Delta-1 Accuracy | 70.6 | 62 |
| Depth Estimation | NYU | AbsRel | 0.109 | 20 |
| Depth Estimation | ETH3D | AbsRel | 0.128 | 19 |
| Surface Normal Estimation | Bedroom Images (in-domain) | L1 Error | 0.0514 | 11 |
| Monocular Depth Estimation | Bedroom Images (in-domain) | REL | 10.72 | 8 |
| Monocular Depth Estimation | Generalization Images (out-of-domain) | Relative Error (REL) | 0.2117 | 8 |
| Surface Normal Estimation | Generalization Images (out-of-domain) | L1 Error | 0.0872 | 8 |