
Towards Compact Single Image Super-Resolution via Contrastive Self-distillation

About

Convolutional neural networks (CNNs) are highly successful at super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network to serve as a compact student network. Then, we propose a novel contrastive loss to improve the quality of SR images and the PSNR/SSIM via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN. Code is available at https://github.com/Booooooooooo/CSD.
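The abstract describes two ingredients: a student built by splitting off a fraction of the teacher's channels, and a contrastive loss that pulls the student toward a positive target while pushing it away from low-quality negatives. A minimal sketch of both ideas follows; the function names, the channel-keeping rule, and the use of raw arrays in place of the pretrained (e.g. VGG) feature space mentioned in the paper are all simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def split_conv_weight(w, ratio=0.25):
    """Channel-splitting sketch (assumed rule): keep the first fraction of a
    conv weight's output and input channels to form the compact student layer.
    `w` has shape (out_channels, in_channels, k, k)."""
    oc = max(1, int(w.shape[0] * ratio))
    ic = max(1, int(w.shape[1] * ratio))
    return w[:oc, :ic]

def contrastive_distill_loss(student_feat, teacher_feat, negative_feats, eps=1e-8):
    """Contrastive distillation loss sketch: the ratio of the student's L1
    distance to the positive (teacher/HR output) over its mean L1 distance to
    negatives (e.g. bicubic upsamples). Minimizing it pulls the student toward
    the positive and away from the negatives. The paper measures these
    distances in a pretrained feature space; raw arrays stand in here."""
    pos = np.abs(student_feat - teacher_feat).mean()
    neg = np.mean([np.abs(student_feat - n).mean() for n in negative_feats])
    return pos / (neg + eps)
```

As a sanity check, a student output identical to the positive drives the loss toward zero, while one identical to a negative blows it up, which is the qualitative behavior the abstract attributes to the contrastive term.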

Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Super-Resolution | Set5 x2 | PSNR 38.001 | 134 |
| Super-Resolution | Set5 x3 | PSNR 34.378 | 108 |
| Super-Resolution | Urban100 x2 | PSNR 31.944 | 86 |
| Super-Resolution | Urban100 x4 | PSNR 25.998 | 85 |
| Super-Resolution | Urban100 x3 | PSNR 28.02 | 79 |
| Super-Resolution | Set5 x4 | PSNR 32.112 | 68 |
| Super-Resolution | Set14 x3 | PSNR 30.309 | 64 |
| Super-Resolution | B100 x2 | PSNR 32.16 | 31 |
| Super-Resolution | Set14 x4 | PSNR 28.563 | 29 |
| Super-Resolution | Set14 x2 | PSNR 33.536 | 29 |

Showing 10 of 12 rows.
