
XPSR: Cross-modal Priors for Diffusion-based Image Super-Resolution

About

Diffusion-based methods, endowed with a formidable generative prior, have recently received increasing attention in Image Super-Resolution (ISR). However, because low-resolution (LR) images often undergo severe degradation, it is challenging for ISR models to perceive semantic and degradation information, resulting in restored images with incorrect content or unrealistic artifacts. To address these issues, we propose a Cross-modal Priors for Super-Resolution (XPSR) framework. Within XPSR, cutting-edge Multimodal Large Language Models (MLLMs) are utilized to acquire precise and comprehensive semantic conditions for the diffusion model. To facilitate better fusion of cross-modal priors, a Semantic-Fusion Attention is proposed. To distill semantic-preserved information rather than undesired degradations, a Degradation-Free Constraint is attached between an LR image and its high-resolution (HR) counterpart. Quantitative and qualitative results show that XPSR is capable of generating high-fidelity and high-realism images across synthetic and real-world datasets. Code is released at https://github.com/qyp2000/XPSR.
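The abstract describes a Semantic-Fusion Attention that fuses MLLM-derived semantic priors into the diffusion model. The paper's exact design is not given here; the sketch below is only a minimal, hypothetical single-head cross-attention in NumPy, where image tokens act as queries over text (caption) tokens, with a residual add as one plausible fusion choice. All names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_cross_attention(img_feats, txt_feats, Wq, Wk, Wv):
    """Hypothetical sketch: image tokens (queries) attend to MLLM text
    tokens (keys/values), then the text-conditioned features are fused
    back into the image stream via a residual connection."""
    Q = img_feats @ Wq                       # (N_img, d) queries
    K = txt_feats @ Wk                       # (N_txt, d) keys
    V = txt_feats @ Wv                       # (N_txt, d) values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # scaled dot-product
    attn = softmax(scores, axis=-1)          # weights over text tokens
    fused = attn @ V                         # (N_img, d) semantic prior
    return img_feats + fused                 # residual fusion (assumed)

# Toy usage with random features and projections.
rng = np.random.default_rng(0)
img = rng.standard_normal((16, 8))           # 16 image tokens, dim 8
txt = rng.standard_normal((4, 8))            # 4 caption tokens, dim 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = semantic_cross_attention(img, txt, Wq, Wk, Wv)
```

The residual add keeps the image features' shape unchanged, so a block like this can be inserted at multiple resolutions of a diffusion UNet.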

Yunpeng Qu, Kun Yuan, Kai Zhao, Qizhi Xie, Jinhua Hao, Ming Sun, Chao Zhou • 2024

Related benchmarks

Task                          Dataset                               Result        Rank
Super-Resolution              Urban bicubic downsampling (test)     PSNR 18.95    60
Super-Resolution              DIV2K bicubic downsampling (test)     PSNR 20.25    60
Real-World Super-Resolution   DIV2K real-world degradation (test)   PSNR 19.42    36
Real-World Super-Resolution   RealSR v1.0 (test)                    PSNR 18.99    8
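The benchmark results above report PSNR (Peak Signal-to-Noise Ratio, in dB), the standard fidelity metric for super-resolution. For reference, a minimal implementation for 8-bit images:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """PSNR in dB between a reference and a restored image.
    Higher is better; identical images give infinity."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage: a uniform error of 1 gray level gives MSE = 1,
# so PSNR = 10 * log10(255^2) ≈ 48.13 dB.
ref = np.zeros((4, 4), dtype=np.uint8)
test = np.ones((4, 4), dtype=np.uint8)
value = psnr(ref, test)
```

Note that diffusion-based restorers such as XPSR trade some PSNR for perceptual realism, which is why their PSNR ranks can sit below regression-based methods while the outputs look sharper.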
