
Revisiting RCAN: Improved Training for Image Super-Resolution

About

Image super-resolution (SR) is a fast-moving field in which novel architectures attract the spotlight. However, most SR models were optimized with dated training strategies. In this work, we revisit the popular RCAN model and examine the effect of different training options in SR. Surprisingly (or perhaps as expected), we show that with a proper training strategy and minimal architecture changes, RCAN can outperform or match nearly all the CNN-based SR architectures published after it on standard benchmarks. Moreover, although RCAN is a very large SR architecture with more than four hundred convolutional layers, we draw a notable conclusion that underfitting, rather than overfitting, is still the main problem limiting model capability. We observe supportive evidence that increasing training iterations clearly improves model performance, while applying regularization techniques generally degrades the predictions. We denote our simply revised RCAN as RCAN-it and recommend that practitioners use it as a baseline for future research. Code is publicly available at https://github.com/zudi-lin/rcan-it.

Zudi Lin, Prateek Garg, Atmadeep Banerjee, Salma Abdel Magid, Deqing Sun, Yulun Zhang, Luc Van Gool, Donglai Wei, Hanspeter Pfister • 2022

Related benchmarks

Task                     | Dataset             | Result     | Rank
Image Super-resolution   | Set5                | PSNR 38.37 | 507
Super-Resolution         | Set14 4x (test)     | PSNR 28.99 | 117
Image Super-resolution   | Urban100 x4 (test)  | PSNR 27.16 | 90
Image Super-resolution   | Urban100 x2 (test)  | PSNR 33.62 | 72
Image Super-resolution   | Urban100 x3 (test)  | PSNR 29.38 | 58
Super-Resolution         | BSD100 4x (test)    | PSNR 27.87 | 56
Image Super-resolution   | Manga109 x2 (test)  | PSNR 39.88 | 52
Super-Resolution         | Manga109 x3 (test)  | PSNR 34.92 | 49
Image Super-resolution   | Manga109 x4 (test)  | PSNR 31.78 | 44
Super-Resolution         | Set14 x3 (test)     | PSNR 30.76 | 43

Showing 10 of 16 rows
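All results above are reported as PSNR (peak signal-to-noise ratio) in dB. As a point of reference, here is a minimal PSNR sketch in NumPy. Note that this is a generic illustration, not the paper's evaluation script: SR benchmarks typically convert to YCbCr, evaluate only the Y channel, and crop image borders by the scale factor, all of which this sketch omits.

```python
import numpy as np

def psnr(reference, prediction, max_val=255.0):
    """PSNR in dB between two images of the same shape.

    max_val is the maximum possible pixel value (255 for 8-bit images).
    """
    reference = reference.astype(np.float64)
    prediction = prediction.astype(np.float64)
    mse = np.mean((reference - prediction) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)
```

Higher is better: for example, a uniform pixel error of 10 on an 8-bit image gives an MSE of 100 and a PSNR of roughly 28 dB, comparable to the x4 results in the table.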
