Progressive Focused Transformer for Single Image Super-Resolution

About

Transformer-based methods have achieved remarkable results in image super-resolution because they can capture non-local dependencies in low-quality input images. However, this feature-intensive modeling approach is computationally expensive: when computing attention weights, it calculates similarities between numerous features that are irrelevant to the query features. These unnecessary similarity calculations not only degrade reconstruction performance but also introduce significant computational overhead. Accurately identifying the features that matter to the current query while avoiding similarity calculations with irrelevant features therefore remains an open problem. To address this issue, we propose a novel and effective Progressive Focused Transformer (PFT) that links the otherwise isolated attention maps across the network through Progressive Focused Attention (PFA), concentrating attention on the most important tokens. PFA not only enables the network to capture more of the critical similar features, but also significantly reduces the overall computational cost by filtering out irrelevant features before similarities are computed. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on various single-image super-resolution benchmarks.
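The abstract describes PFA only at a high level. As a rough illustration of the general idea, the sketch below (PyTorch; not the authors' implementation) uses the previous layer's attention map to keep only a top-k subset of keys per query before computing similarities. The function name `progressive_focused_attention`, the `keep` budget, and the top-k selection rule are all illustrative assumptions.

```python
import torch

def progressive_focused_attention(q, k, v, prev_attn=None, keep=64):
    # Hypothetical sketch of the progressive-focusing idea from the abstract.
    # q, k, v: (batch, tokens, dim); prev_attn: (batch, tokens, tokens) or None.
    b, n, d = q.shape
    scale = d ** -0.5
    if prev_attn is None:
        # First layer: ordinary dense attention over all tokens.
        attn = torch.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)
        return attn @ v, attn
    keep = min(keep, n)
    # For each query, retain the `keep` keys the previous layer weighted most heavily.
    idx = prev_attn.topk(keep, dim=-1).indices                      # (b, n, keep)
    gather_idx = idx.unsqueeze(-1).expand(b, n, keep, d)
    k_sel = torch.gather(k.unsqueeze(1).expand(b, n, n, d), 2, gather_idx)
    v_sel = torch.gather(v.unsqueeze(1).expand(b, n, n, d), 2, gather_idx)
    # Similarities are computed only against the retained keys.
    scores = (q.unsqueeze(2) * k_sel).sum(-1) * scale               # (b, n, keep)
    w = torch.softmax(scores, dim=-1)
    out = (w.unsqueeze(-1) * v_sel).sum(dim=2)                      # (b, n, d)
    # Scatter the sparse weights back to a dense map so the next layer can focus further.
    attn = torch.zeros(b, n, n, device=q.device, dtype=w.dtype).scatter_(-1, idx, w)
    return out, attn

# Toy usage: a dense first pass, then a focused second pass.
b, n, d = 1, 256, 32
q, k, v = torch.randn(b, n, d), torch.randn(b, n, d), torch.randn(b, n, d)
out1, attn1 = progressive_focused_attention(q, k, v)
out2, attn2 = progressive_focused_attention(q, k, v, prev_attn=attn1, keep=32)
```

In this reading, a first dense pass produces an attention map and each subsequent pass narrows the candidate key set, which is the "progressive focusing" the abstract refers to; the filtering happens before any similarity is computed, which is where the claimed savings would come from.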

Wei Long, Xingyu Zhou, Leheng Zhang, Shuhang Gu • 2025

Related benchmarks

Task                     Dataset    PSNR (dB)   Rank
Image Super-resolution   Manga109   40.49       821
Super-Resolution         Set5       38.67       785
Image Super-resolution   Set5       38.68       692
Super-Resolution         Urban100   33.67       652
Super-Resolution         Set14      35.11       613
Image Super-resolution   Set14      35          506
Image Super-resolution   Urban100   34.9        406
Super-Resolution         Manga109   40.64       330
Super-Resolution         BSD100     32.71       329
Image Super-resolution   B100       32.67       101

Showing 10 of 24 rows.

Other info

Code
