Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
About
Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. Then, we propose a novel contrastive loss that improves the quality of SR images, and hence PSNR/SSIM, via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN. Code is available at https://github.com/Booooooooooo/CSD.
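The two ingredients above can be sketched in a few lines. This is a minimal NumPy illustration, not the repository's implementation: `split_channels` and `contrastive_distill_loss` are hypothetical names, the feature arrays stand in for deep (e.g. VGG) feature maps, and the exact loss weighting used in the paper is omitted.

```python
import numpy as np

def split_channels(weight, ratio=0.25):
    """Channel-splitting sketch: carve a compact student conv kernel out of
    a teacher kernel by keeping the first fraction of output/input channels.
    weight: array of shape (out_ch, in_ch, kH, kW)."""
    out_c = max(1, int(weight.shape[0] * ratio))
    in_c = max(1, int(weight.shape[1] * ratio))
    return weight[:out_c, :in_c]

def contrastive_distill_loss(feat_student, feat_teacher, feat_negatives, eps=1e-8):
    """Contrastive-distillation sketch: pull the student's features toward
    the teacher's output (positive) while pushing them away from
    low-quality negatives, using L1 distances in feature space."""
    pos = np.abs(feat_student - feat_teacher).mean()
    neg = np.mean([np.abs(feat_student - n).mean() for n in feat_negatives])
    return pos / (neg + eps)
```

A student that matches the teacher while staying far from the negatives drives the ratio toward zero, which is the behaviour the contrastive term rewards.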
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Super-Resolution | Set5 x2 | PSNR | 38.001 | 134 |
| Super-Resolution | Set5 x3 | PSNR | 34.378 | 108 |
| Super-Resolution | Urban100 x2 | PSNR | 31.944 | 86 |
| Super-Resolution | Urban100 x4 | PSNR | 25.998 | 85 |
| Super-Resolution | Urban100 x3 | PSNR | 28.02 | 79 |
| Super-Resolution | Set5 x4 | PSNR | 32.112 | 68 |
| Super-Resolution | Set14 x3 | PSNR | 30.309 | 64 |
| Super-Resolution | B100 x2 | PSNR | 32.16 | 31 |
| Super-Resolution | Set14 x4 | PSNR | 28.563 | 29 |
| Super-Resolution | Set14 x2 | PSNR | 33.536 | 29 |