PAMS: Quantized Super-Resolution via Parameterized Max Scale

About

Deep convolutional neural networks (DCNNs) have shown dominant performance in super-resolution (SR). However, their heavy memory cost and computation overhead significantly restrict their practical deployment on resource-limited devices, mainly because of the floating-point storage of weights and activations and the floating-point operations between them. Although previous endeavors mostly resort to fixed-point operations, quantizing both weights and activations with fixed coding lengths may cause a significant performance drop, especially at low bit-widths. Moreover, most state-of-the-art SR models lack batch normalization and therefore have a large dynamic quantization range, which is another cause of the performance drop. To address these two issues, we propose a new quantization scheme termed PArameterized Max Scale (PAMS), which applies a trainable truncation parameter to adaptively explore the upper bound of the quantization range. In addition, a structured knowledge transfer (SKT) loss is introduced to fine-tune the quantized network. Extensive experiments demonstrate that the proposed PAMS scheme effectively compresses and accelerates existing SR models such as EDSR and RDN. Notably, 8-bit PAMS-EDSR improves PSNR on the Set5 benchmark from 32.095 dB to 32.124 dB with a 2.42× compression ratio, achieving a new state of the art.
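For concreteness, here is a minimal PyTorch sketch of the two ingredients described above: a uniform quantizer with a trainable clipping bound, and a feature-matching fine-tuning loss. The names (`PAMSQuantizer`, `alpha`, `skt_loss`) and the exact normalization used in the distillation loss are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PAMSQuantizer(nn.Module):
    """Symmetric k-bit uniform quantizer whose clipping bound is learned.

    `alpha` (the trainable upper bound of the quantization range) is a
    hypothetical name for what the abstract calls the parameterized max scale.
    """

    def __init__(self, bits: int = 8, init_alpha: float = 10.0):
        super().__init__()
        self.levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = self.alpha.abs()                   # keep the bound positive
        # Differentiable clipping to [-alpha, alpha]; gradients w.r.t. alpha
        # flow through the saturated region, so the range adapts in training.
        x_c = torch.max(torch.min(x, alpha), -alpha)
        scale = alpha / self.levels
        x_q = torch.round(x_c / scale) * scale     # uniform quantization
        # Straight-through estimator: quantized values in the forward pass,
        # identity gradient in the backward pass.
        return x_c + (x_q - x_c).detach()


def skt_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor,
             eps: float = 1e-6) -> torch.Tensor:
    """Feature-matching distillation between the quantized student and the
    full-precision teacher. Matching L2-normalized feature maps is one
    plausible instantiation of a structured transfer loss; the paper's
    exact statistic may differ.
    """
    s = student_feat / (student_feat.norm() + eps)
    t = teacher_feat / (teacher_feat.norm() + eps)
    return F.mse_loss(s, t)
```

During fine-tuning, one would combine the usual reconstruction loss (e.g. L1 against the high-resolution target) with a weighted `skt_loss` on intermediate features, updating only the quantized student (including each layer's `alpha`) while the full-precision teacher stays frozen.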

Huixia Li, Chenqian Yan, Shaohui Lin, Xiawu Zheng, Yuchao Li, Baochang Zhang, Fan Yang, Rongrong Ji • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Super-Resolution | Set14 (test) | PSNR 33.2 | 246 |
| Super-Resolution | Urban100 (test) | PSNR 31.1 | 205 |
| Super-Resolution | Set5 (test) | PSNR 37.67 | 184 |
| Super-Resolution | BSDS100 (test) | PSNR 31.94 | 89 |
