
Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution

About

Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods. However, these approaches suffer from essential shortsightedness created by only utilizing the standard self-attention-based reasoning. In this paper, we introduce an effective hybrid SR network to aggregate enriched features, including local features from CNNs and long-range multi-scale dependencies captured by transformers. Specifically, our network comprises transformer and convolutional branches, which synergetically complement each representation during the restoration procedure. Furthermore, we propose a cross-scale token attention module, allowing the transformer branch to exploit the informative relationships among tokens across different scales efficiently. Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
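The core idea of the cross-scale token attention module is letting tokens at one scale attend to tokens pooled from another scale. The paper's actual module is not reproduced here; the following is a minimal NumPy sketch of the general cross-scale attention idea, where the token counts, dimensions, and lack of learned query/key/value projections are all simplifying assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_scale_attention(fine_tokens, coarse_tokens):
    """Attend fine-scale queries over coarse-scale keys/values.

    fine_tokens:   (N_f, d) tokens from the full-resolution grid
    coarse_tokens: (N_c, d) tokens pooled from a downscaled grid
    Returns:       (N_f, d) fine tokens enriched with coarse-scale context

    NOTE: illustrative only; real modules use learned Q/K/V projections,
    multiple heads, and residual connections.
    """
    d = fine_tokens.shape[-1]
    scores = fine_tokens @ coarse_tokens.T / np.sqrt(d)  # (N_f, N_c)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ coarse_tokens

rng = np.random.default_rng(0)
fine = rng.normal(size=(64, 32))    # e.g. an 8x8 grid of fine tokens
coarse = rng.normal(size=(16, 32))  # e.g. a 4x4 grid of coarse tokens
out = cross_scale_attention(fine, coarse)
print(out.shape)  # (64, 32)
```

Because every fine token aggregates over all coarse tokens, this gives the transformer branch access to long-range, multi-scale context at a cost proportional to the (smaller) coarse token count.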

Jinsu Yoo, Taehoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim • 2022

Related benchmarks

| Task | Dataset | PSNR (dB) | Rank |
| --- | --- | --- | --- |
| Super-Resolution | Set14 x4 (test) | 29.27 | 117 |
| Super-Resolution | Set5 x2 (test) | 38.53 | 95 |
| Image Super-Resolution | Urban100 x4 (test) | 27.92 | 90 |
| Super-Resolution | Manga109 x4 | 32.44 | 88 |
| Super-Resolution | Set5 x3 (test) | 35.09 | 87 |
| Image Super-Resolution | Urban100 x2 (test) | 34.25 | 72 |
| Image Super-Resolution | Urban100 x3 (test) | 30.26 | 58 |
| Super-Resolution | BSD100 x4 (test) | 28 | 56 |
| Image Super-Resolution | Manga109 x2 (test) | 40.11 | 52 |
| Super-Resolution | Set14 x2 | 34.68 | 51 |

Showing 10 of 15 rows
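All results above are peak signal-to-noise ratio (PSNR) values in decibels, computed from the mean squared error between the ground-truth and super-resolved images. A minimal sketch of the standard formula (assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 4 intensity levels gives MSE = 16.
ref = np.full((8, 8), 128.0)
rec = ref + 4.0
print(round(psnr(ref, rec), 2))  # 36.09
```

Higher PSNR means a closer match to the ground truth, so a gain of even a few tenths of a dB on these benchmarks is considered meaningful.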
