
Exploiting Diffusion Prior for Real-World Image Super-Resolution

About

We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution (SR). Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during the inference process. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to resolutions of any size. A comprehensive evaluation of our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.
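The quality–fidelity trade-off described above can be pictured as interpolating between purely generative decoder features and encoder features derived from the degraded input, steered by a single scalar. A minimal NumPy sketch of that idea, with all names (`cfw_blend`, `w`) hypothetical and not taken from the StableSR code:

```python
import numpy as np

def cfw_blend(decoder_feat: np.ndarray, encoder_feat: np.ndarray, w: float) -> np.ndarray:
    """Toy stand-in for a controllable feature wrapping module:
    w = 0 keeps the purely generative decoder features (higher perceptual quality),
    w = 1 leans fully on encoder features from the input (higher fidelity)."""
    assert 0.0 <= w <= 1.0, "w is a user-chosen trade-off scalar in [0, 1]"
    return (1.0 - w) * decoder_feat + w * encoder_feat

# Example: blend two feature maps halfway between quality and fidelity.
dec = np.zeros((4, 4))
enc = np.ones((4, 4))
out = cfw_blend(dec, enc, 0.5)
```

In the actual method the combination is learned rather than a fixed linear mix, but the key property is the same: the scalar is adjusted at inference time, with no retraining.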
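The progressive aggregation sampling strategy addresses the fixed input size of the pre-trained diffusion model by processing overlapping tiles and blending them where they overlap. The following sketch (names `tiled_process`, `gaussian_weights` are illustrative, not from the released code) shows the core idea on a plain 2-D array, using Gaussian weights to suppress seams between tiles:

```python
import numpy as np

def gaussian_weights(tile: int, sigma: float = 0.3) -> np.ndarray:
    # 2-D Gaussian mask peaking at the tile centre, down-weighting tile borders.
    ax = np.linspace(-1.0, 1.0, tile)
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return np.outer(g, g)

def tiled_process(img: np.ndarray, tile: int, overlap: int, fn) -> np.ndarray:
    """Apply a fixed-size model `fn` (tile -> tile) to an image of arbitrary
    size by sliding overlapping tiles and averaging with Gaussian weights."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    acc = np.zeros((h, w), dtype=float)   # accumulated weights per pixel
    mask = gaussian_weights(tile)
    step = tile - overlap
    for y in range(0, h - tile + step, step):
        for x in range(0, w - tile + step, step):
            # Clamp the last tiles so they stay inside the image.
            y0, x0 = min(y, h - tile), min(x, w - tile)
            patch = fn(img[y0:y0 + tile, x0:x0 + tile])
            out[y0:y0 + tile, x0:x0 + tile] += patch * mask
            acc[y0:y0 + tile, x0:x0 + tile] += mask
    return out / acc  # weighted average over all tiles covering each pixel

# With an identity "model", aggregation should reproduce the input exactly.
big = np.random.rand(16, 16)
res = tiled_process(big, tile=8, overlap=4, fn=lambda t: t)
```

In StableSR this aggregation happens in latent space at each sampling step rather than once on the final image, but the tiling-plus-weighted-averaging structure is the same.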

Jianyi Wang, Zongsheng Yue, Shangchen Zhou, Kelvin C.K. Chan, Chen Change Loy • 2023

Related benchmarks

Task                    Dataset                  Metric    Result   Rank
Object Detection        COCO 2017 (val)          -         -        2643
Instance Segmentation   COCO 2017 (val)          APm       0.146    1201
Semantic Segmentation   ADE20K                   mIoU      19.6     1024
Super-Resolution        Set5                     PSNR      20.35    785
Super-Resolution        DIV2K                    PSNR      20.59    134
Image Super-Resolution  RealSR                   PSNR      26.27    130
Image Super-Resolution  DRealSR                  MANIQA    0.5601   130
Image Super-Resolution  DIV2K (val)              LPIPS     0.3113   106
Super-Resolution        ODI-SR (test)            WS-PSNR   22.29    93
Super-Resolution        SUN 360 Panorama (test)  WS-PSNR   22.55    70

(Showing 10 of 125 rows)
