
Harnessing Diffusion Models for Visual Perception with Meta Prompts

About

The issue of generative pretraining for vision models has persisted as a long-standing conundrum. At present, text-to-image (T2I) diffusion models demonstrate remarkable proficiency in generating high-definition images matching textual inputs, a feat made possible by pre-training on large-scale image-text pairs. This leads to a natural inquiry: can diffusion models be utilized to tackle visual perception tasks? In this paper, we propose a simple yet effective scheme to harness a diffusion model for visual perception tasks. Our key insight is to introduce learnable embeddings (meta prompts) into the pre-trained diffusion model to extract proper features for perception. The effect of meta prompts is two-fold. First, as a direct replacement for the text embeddings in the T2I model, they activate task-relevant features during feature extraction. Second, they are used to re-arrange the extracted features, ensuring that the model focuses on the features most pertinent to the task at hand. Additionally, we design a recurrent refinement training strategy that fully leverages the properties of diffusion models, thereby yielding stronger visual features. Extensive experiments across various benchmarks validate the effectiveness of our approach. Our approach achieves new performance records in depth estimation on NYU Depth V2 and KITTI, and in semantic segmentation on Cityscapes. Concurrently, the proposed method attains results comparable to the current state of the art in semantic segmentation on ADE20K and pose estimation on COCO, further exemplifying its robustness and versatility.
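The two roles of meta prompts described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the shapes, the random stand-ins for the frozen diffusion features, and the choice of simple cross-attention as the "re-arrangement" step are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes (the paper does not fix these here).
num_prompts, dim = 4, 8   # learnable meta prompts replacing the T2I text embeddings
h, w = 5, 5               # spatial size of one diffusion feature map

rng = np.random.default_rng(0)
meta_prompts = rng.normal(size=(num_prompts, dim))  # learned, task-specific embeddings
features = rng.normal(size=(h * w, dim))            # stand-in for frozen diffusion features

# Role 1: the prompts are fed to the diffusion model in place of text embeddings,
# steering which features get activated (happens inside the frozen model, not shown).

# Role 2: re-arrange the extracted features so task-relevant ones dominate.
# One simple reading is cross-attention with the prompts as queries:
attn = softmax(meta_prompts @ features.T / np.sqrt(dim))  # (num_prompts, h*w)
prompt_features = attn @ features                         # (num_prompts, dim)

print(prompt_features.shape)  # (4, 8): one aggregated feature per meta prompt
```

Each prompt thus pools the spatial features it attends to most, which is one way the model can be made to "focus on the most pertinent features" before a lightweight task head consumes them.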

Qiang Wan, Zilong Huang, Bingyi Kang, Jiashi Feng, Li Zhang • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | – | 2731 |
| Semantic segmentation | Cityscapes (test) | mIoU 86.2 | 1145 |
| Semantic segmentation | ADE20K | mIoU 40.89 | 936 |
| Monocular depth estimation | KITTI (Eigen) | Abs Rel 0.047 | 502 |
| Semantic segmentation | Cityscapes (val) | mIoU 87.1 | 287 |
| Depth estimation | NYU Depth V2 | RMSE 0.223 | 177 |
| Human pose estimation | COCO (val) | AP 79 | 53 |
| Semantic segmentation | Cityscapes | mIoU 71.94 | 27 |

Other info

Code
