Multi-scale Attention Network for Single Image Super-Resolution

About

ConvNets can compete with transformers on high-level vision tasks by exploiting larger receptive fields. To unleash the potential of ConvNets in super-resolution, we propose a multi-scale attention network (MAN), which couples the classical multi-scale mechanism with emerging large kernel attention. In particular, we propose multi-scale large kernel attention (MLKA) and a gated spatial attention unit (GSAU). In MLKA, we modify large kernel attention with multi-scale and gate schemes to obtain abundant attention maps at various granularity levels, thereby aggregating global and local information while avoiding potential blocking artifacts. In GSAU, we integrate the gate mechanism with spatial attention to remove an unnecessary linear layer and aggregate informative spatial context. To confirm the effectiveness of our designs, we evaluate MAN at multiple complexities by simply stacking different numbers of MLKA and GSAU modules. Experimental results show that MAN performs on par with SwinIR and achieves varied trade-offs between state-of-the-art performance and computational cost.

Yan Wang, Yusen Li, Gang Wang, Xiaoguang Liu • 2022
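
For intuition, here is a minimal PyTorch sketch of the MLKA idea described in the abstract: a large kernel attention map is computed per channel group at several scales, with each map gating its own input split. The decomposition of the large kernel into depth-wise, dilated depth-wise, and point-wise convolutions follows VAN-style LKA; the kernel sizes (7, 9, 11), the split scheme, and the gating are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """VAN-style large kernel attention: a large depth-wise conv
    decomposed into a small depth-wise conv, a dilated depth-wise
    conv, and a 1x1 point-wise conv."""
    def __init__(self, dim, k=7, dilation=3):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 2 * dilation - 1,
                            padding=dilation - 1, groups=dim)
        self.dw_dilated = nn.Conv2d(dim, dim, k,
                                    padding=(k // 2) * dilation,
                                    dilation=dilation, groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        return self.pw(self.dw_dilated(self.dw(x)))

class MLKA(nn.Module):
    """Multi-scale large kernel attention (sketch): split the
    channels into groups, compute an LKA map at a different scale
    per group, and gate each map with its own input split."""
    def __init__(self, dim, kernels=(7, 9, 11)):
        super().__init__()
        assert dim % len(kernels) == 0
        self.group = dim // len(kernels)
        self.branches = nn.ModuleList(LKA(self.group, k) for k in kernels)

    def forward(self, x):
        splits = torch.split(x, self.group, dim=1)
        # each scale yields an attention map; the elementwise
        # product acts as the gate on the identity split
        outs = [lka(s) * s for lka, s in zip(self.branches, splits)]
        return torch.cat(outs, dim=1)
```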
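A similarly hedged sketch of GSAU, reusing the imports and the MLKA module from the block above: the second point-wise layer of a conventional feed-forward block is dropped, and one half of the expanded channels is instead modulated by a depth-wise spatial map computed from the other half. The expansion ratio and the 7x7 depth-wise kernel are assumptions for illustration.

```python
class GSAU(nn.Module):
    """Gated spatial attention unit (sketch): a gated feed-forward
    block in which a depth-wise conv supplies spatial attention and
    the elementwise product replaces the second linear layer."""
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.proj_in = nn.Conv2d(dim, hidden, 1)
        self.spatial = nn.Conv2d(hidden // 2, hidden // 2, 7,
                                 padding=3, groups=hidden // 2)
        self.proj_out = nn.Conv2d(hidden // 2, dim, 1)

    def forward(self, x):
        u, v = self.proj_in(x).chunk(2, dim=1)
        # gate: spatial attention on one half modulates the other
        return self.proj_out(u * self.spatial(v))

# quick shape check on a hypothetical 48-channel feature map
x = torch.randn(1, 48, 64, 64)
block = nn.Sequential(MLKA(48), GSAU(48))
assert block(x).shape == x.shape
```

As the abstract notes, MAN variants of different complexities come from stacking different numbers of these two modules; in this sketch that would mean chaining MLKA and GSAU blocks (with residual connections) before a standard pixel-shuffle upsampler.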

Related benchmarks

Task | Dataset | PSNR (dB) | Rank
Image Super-resolution | Manga109 | 31.25 | 656
Image Super-resolution | Set5 | 32.5 | 507
Image Super-resolution | Set14 | 28.87 | 329
Image Super-resolution | Urban100 | 26.7 | 221
Image Super-resolution | BSD100 | 27.77 | 210
Super-Resolution | Set5 x2 | 38.42 | 134
Super-Resolution | B100 x2 | 32.53 | 31
Classical Image Super-Resolution | Set5 x2 | 38.44 | 27
Classical Image Super-Resolution | Set14 x2 | 34.49 | 5
Super-Resolution | U100 x2 | 33.73 | 5

Showing 10 of 14 rows.

Other info

Code
