
Local-Selective Feature Distillation for Single Image Super-Resolution

About

Recent improvements in convolutional neural network (CNN)-based single image super-resolution (SISR) methods rely heavily on elaborating network architectures rather than on finding training algorithms beyond simply minimizing the regression loss. Adopting knowledge distillation (KD) opens a way to further improve SISR, and it is also beneficial in terms of model efficiency. KD is a model compression method that improves the performance of deep neural networks (DNNs) without requiring additional parameters at test time. It has recently drawn attention for its ability to provide a better capacity-performance tradeoff. In this paper, we propose a novel feature distillation (FD) method that is suitable for SISR. We show that the existing FitNet-based FD method suffers in the SISR task, and propose to modify the FD algorithm to focus on local feature information. In addition, we propose a teacher-student-difference-based soft feature attention method that selectively focuses on specific pixel locations to extract feature information. We call our method local-selective feature distillation (LSFD) and verify that it outperforms conventional FD methods on SISR problems.
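The teacher-student-difference-based soft attention described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `lsfd_loss`, the temperature parameter `tau`, and the exact weighting scheme are assumptions; the idea shown is simply that spatial locations where teacher and student features differ more receive higher attention weight in the distillation loss.

```python
import numpy as np

def lsfd_loss(f_teacher, f_student, tau=1.0):
    """Hypothetical sketch of a difference-weighted feature distillation loss.

    f_teacher, f_student: feature maps of shape (channels, H, W).
    Pixels where teacher and student features differ most receive
    higher soft-attention weights (softmax with temperature tau).
    """
    # Per-pixel difference magnitude, averaged over channels -> (H, W)
    diff = np.abs(f_teacher - f_student).mean(axis=0)
    # Soft attention over spatial locations
    w = np.exp(diff / tau)
    w = w / w.sum()
    # Attention-weighted squared error, averaged over channels -> scalar
    sq_err = ((f_teacher - f_student) ** 2).mean(axis=0)
    return float((w * sq_err).sum())

# Toy feature maps: (channels, height, width)
rng = np.random.default_rng(0)
t = rng.standard_normal((4, 8, 8))
s = t + 0.1 * rng.standard_normal((4, 8, 8))
print(lsfd_loss(t, s))  # small positive scalar
```

In this sketch the attention weights are normalized over all spatial positions, so locations with larger teacher-student disagreement dominate the loss; a uniform-weight version would reduce to an ordinary FitNet-style L2 feature loss.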

SeongUk Park, Nojun Kwak • 2021

Related benchmarks

Task              | Dataset      | PSNR   | Rank
Super-Resolution  | Set5 x2      | 38.189 | 134
Super-Resolution  | Set5 x3      | 34.666 | 108
Super-Resolution  | Urban100 x2  | 32.704 | 86
Super-Resolution  | Urban100 x4  | 26.547 | 85
Super-Resolution  | Urban100 x3  | 28.689 | 79
Super-Resolution  | Set5 x4      | 32.497 | 68
Super-Resolution  | Set14 x3     | 30.525 | 64
Super-Resolution  | B100 x2      | 32.323 | 31
Super-Resolution  | Set14 x2     | 33.882 | 29
Super-Resolution  | Set14 x4     | 28.783 | 29
Showing 10 of 12 rows
