
PTQD: Accurate Post-Training Quantization for Diffusion Models

About

Diffusion models have recently dominated image synthesis tasks. However, the iterative denoising process is computationally expensive at inference time, making diffusion models less practical for low-latency and scalable real-world applications. Post-training quantization (PTQ) of diffusion models can significantly reduce the model size and accelerate the sampling process without re-training. Nonetheless, applying existing PTQ methods directly to low-bit diffusion models can significantly impair the quality of generated samples. Specifically, for each denoising step, quantization noise leads to deviations in the estimated mean and mismatches with the predetermined variance schedule. As the sampling process proceeds, the quantization noise may accumulate, resulting in a low signal-to-noise ratio (SNR) during the later denoising steps. To address these challenges, we propose a unified formulation for the quantization noise and diffusion perturbed noise in the quantized denoising process. Specifically, we first disentangle the quantization noise into its correlated and residual uncorrelated parts with respect to its full-precision counterpart. The correlated part can be easily corrected by estimating the correlation coefficient. For the uncorrelated part, we subtract the bias from the quantized results to correct the mean deviation and calibrate the denoising variance schedule to absorb the excess variance resulting from quantization. Moreover, we introduce a mixed-precision scheme for selecting the optimal bitwidth for each denoising step. Extensive experiments demonstrate that our method outperforms previous post-training quantized diffusion models, with only a 0.06 increase in FID score compared to full-precision LDM-4 on ImageNet 256x256, while reducing bit operations by 19.9x. Code is available at https://github.com/ziplab/PTQD.
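The noise-correction idea in the abstract can be illustrated numerically. The sketch below (not the official PTQD code; the synthetic noise model and variable names are assumptions for illustration) treats the quantization noise as a part linearly correlated with the full-precision output plus an uncorrelated residual, estimates the correlation coefficient by least squares, rescales to remove the correlated part, subtracts the residual's mean to fix the bias, and folds the residual's variance into a hypothetical denoising-step variance:

```python
# Hedged sketch of PTQD-style quantization-noise correction on toy data.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one denoising step's outputs.
x_fp = rng.normal(size=100_000)                 # full-precision output
k_true, bias_true, sigma_res = 0.05, 0.02, 0.1  # synthetic noise model (assumed)
x_q = (1 + k_true) * x_fp + bias_true \
      + rng.normal(scale=sigma_res, size=x_fp.shape)  # "quantized" output

# 1) Correlated part: least-squares slope of the quantization noise
#    (x_q - x_fp) against x_fp gives the correlation coefficient k.
noise = x_q - x_fp
xc = x_fp - x_fp.mean()
k_hat = np.dot(noise - noise.mean(), xc) / np.dot(xc, xc)
x_corr = x_q / (1 + k_hat)                      # remove the correlated part

# 2) Uncorrelated residual: subtract its mean to correct the bias...
residual = x_corr - x_fp
x_corr = x_corr - residual.mean()

# ...and calibrate the step's variance so total noise still matches the
# predetermined schedule (sigma_step is a hypothetical schedule value).
sigma_step = 0.5
sigma_calibrated = np.sqrt(max(sigma_step**2 - residual.var(), 0.0))

print(f"k_hat={k_hat:.3f}, calibrated sigma={sigma_calibrated:.3f}")
```

With the correlated part removed and the residual variance absorbed by a smaller per-step variance, the quantized sampler injects roughly the same total noise per step as the full-precision schedule intended, which is what keeps the SNR from collapsing over many steps.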

Yefei He, Luping Liu, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang• 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Generation | ImageNet 256x256 (val) | FID | 5.69 | 307
Class-conditional Image Generation | ImageNet 256x256 (val) | FID | 4.02 | 293
Image Generation | ImageNet 512x512 (val) | FID-50K | 73.45 | 184
Image Generation | LSUN Bedroom 256x256 (test) | FID | 3.75 | 73
Unconditional Image Generation | FFHQ 256x256 | FID | 10.69 | 64
Text-to-Image Generation | MJHQ-30K | Overall FID | 36.84 | 59
Text-to-Image Generation | COCO | FID | 33.78 | 51
Class-conditional Image Generation | ImageNet-1K 256x256 (test) | FID | 10.4 | 50
Text-to-Image Generation | DCI | FID | 65.42 | 26
Image Generation | CelebA-HQ | FID | 21.08 | 23

Showing 10 of 16 rows.

Other info

Code
