
CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution

About

Low-bit model quantization for image super-resolution (SR) is a longstanding task, valued for its strong compression and acceleration ability. However, accuracy degradation is inevitable when compressing a full-precision (FP) model to ultra-low bit widths (2–4 bits). Experimentally, we observe that this degradation is mainly attributed to the quantization of activations rather than model weights. In numerical analysis, the condition number of the weights measures how much the output can change for a small change in the input, and thus inherently reflects the quantization error. Therefore, we propose CondiQuant, a condition-number-based low-bit post-training quantization method for image super-resolution. Specifically, we formulate the quantization error in terms of the condition number of the weight matrices. By decoupling representation ability from quantization sensitivity, we design an efficient proximal gradient descent algorithm that iteratively minimizes the condition number while keeping the output unchanged. With comprehensive experiments, we demonstrate that CondiQuant outperforms existing state-of-the-art post-training quantization methods in accuracy without computation overhead, and achieves the theoretically optimal compression ratio in model parameters. Our code and model are released at https://github.com/Kai-Liu001/CondiQuant.
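The link between conditioning and activation-quantization error can be illustrated with a small worst-case sketch. This is a hypothetical NumPy example, not the CondiQuant algorithm: it constructs the input and the quantization noise along the weakest and strongest singular directions of a random weight matrix, where the relative output error reaches its upper bound of the condition number times the relative input error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weight matrix standing in for a linear/conv layer's weights
# (illustrative only, not the paper's model).
W = rng.standard_normal((64, 64))
U, s, Vt = np.linalg.svd(W)
cond = s[0] / s[-1]  # condition number = sigma_max / sigma_min

# Worst case: the activation lies along the weakest input direction,
# while the quantization noise lies along the strongest one.
x = Vt[-1]           # right singular vector of sigma_min (unit norm)
dx = 1e-3 * Vt[0]    # small "quantization error" along sigma_max

rel_in = np.linalg.norm(dx) / np.linalg.norm(x)            # 1e-3
rel_out = np.linalg.norm(W @ dx) / np.linalg.norm(W @ x)   # cond * 1e-3

# Amplification factor equals the condition number in this worst case.
print(f"cond(W) = {cond:.1f}, error amplification = {rel_out / rel_in:.1f}")
```

For a well-conditioned matrix this amplification stays near 1, which is why driving the condition number down (while preserving the layer's output) reduces the model's sensitivity to activation quantization.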

Kai Liu, Dehui Wang, Zhiteng Li, Zheng Chen, Yong Guo, Wenbo Li, Linghe Kong, Yulun Zhang • 2025

Related benchmarks

Task                           Dataset    Metric  Result  Rank
Image Super-resolution         Manga109   PSNR    38.57   656
Image Super-resolution         Set5       PSNR    38.03   507
Single Image Super-Resolution  Urban100   PSNR    32.03   500
Image Super-resolution         Set14      PSNR    33.5    329
Image Super-resolution         Urban100   PSNR    28.05   221
Image Super-resolution         B100       PSNR    32.16   51
