
AddSR: Accelerating Diffusion-based Blind Super-Resolution with Adversarial Diffusion Distillation

About

Blind super-resolution methods based on Stable Diffusion showcase formidable generative capabilities in reconstructing clear high-resolution images with intricate details from low-resolution inputs. However, their practical applicability is often hampered by poor efficiency, stemming from the requirement of hundreds or thousands of sampling steps. Inspired by the efficient adversarial diffusion distillation (ADD), we design AddSR to address this issue by incorporating the ideas of both distillation and ControlNet. Specifically, we first propose a prediction-based self-refinement strategy to provide high-frequency information in the student model output with marginal additional time cost. We further refine the training process by employing HR images, rather than LR images, to regulate the teacher model, providing a more robust constraint for distillation. Second, we introduce a timestep-adaptive ADD to address the perception-distortion imbalance problem introduced by the original ADD. Extensive experiments demonstrate that AddSR generates better restoration results while achieving faster inference than previous SD-based state-of-the-art models (e.g., 7× faster than SeeSR).
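The prediction-based self-refinement idea can be illustrated with a toy sketch: at each denoising step the student predicts a clean image, and that prediction (rather than the original LR input) becomes the control signal for the next step, supplying progressively higher-frequency guidance. Everything below is hypothetical scaffolding, not the AddSR implementation: `student_denoise`, the linear noise schedule, and the step count are illustrative stand-ins for the ControlNet-augmented SD UNet and its real scheduler.

```python
import numpy as np

rng = np.random.default_rng(0)

def student_denoise(x_t, control, t):
    """Toy stand-in for the student network: nudge the noisy latent x_t
    toward the control signal. The real model is a ControlNet-conditioned
    diffusion UNet; only the data flow is illustrated here."""
    return x_t + 0.5 * (control - x_t) / (t + 1)

def restore_with_self_refinement(lr_image, num_steps=4):
    """Hypothetical sketch of prediction-based self-refinement:
    step 1 is conditioned on the LR input; every later step is
    conditioned on the previous step's clean-image prediction."""
    x = rng.normal(size=lr_image.shape)      # start from pure noise
    control = lr_image                       # initial control: the LR input
    for t in reversed(range(num_steps)):
        x0_pred = student_denoise(x, control, t)  # predicted clean image
        control = x0_pred                    # self-refinement: reuse prediction
        alpha = t / num_steps                # toy noise schedule (0 at last step)
        x = x0_pred + alpha * rng.normal(size=x.shape)  # re-noise for next step
    return x
```

At the final step the schedule weight reaches zero, so the output is the last clean-image prediction itself; the refinement loop adds only one extra conditioning assignment per step, which is why the time cost is marginal.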

Rui Xie, Chen Zhao, Kai Zhang, Zhenyu Zhang, Jun Zhou, Jian Yang, Ying Tai • 2024

Related benchmarks

Task                    Dataset      Metric  Result  Rank
Image Super-resolution  DRealSR      MANIQA  0.6014  78
Image Super-resolution  RealSR       PSNR    22.53   71
Image Super-resolution  DIV2K (val)  LPIPS   0.3779  59
