
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models

About

Diffusion models have demonstrated remarkable capabilities in image synthesis and related generative tasks. Nevertheless, their practicality for real-world applications is constrained by substantial computational costs and latency. Quantization is a dominant way to compress and accelerate diffusion models; post-training quantization (PTQ) and quantization-aware training (QAT) are the two main approaches, each with its own trade-offs. While PTQ is efficient in terms of both time and data usage, it can lead to diminished performance at low bit-widths. QAT, on the other hand, alleviates this performance degradation but comes with substantial demands on computational and data resources. In this paper, we introduce a data-free and parameter-efficient fine-tuning framework for low-bit diffusion models, dubbed EfficientDM, which achieves QAT-level performance with PTQ-like efficiency. Specifically, we propose a quantization-aware variant of the low-rank adapter (QALoRA) that can be merged with the model weights and jointly quantized to low bit-width. The fine-tuning process distills the denoising capabilities of the full-precision model into its quantized counterpart, eliminating the need for training data. We also introduce scale-aware optimization and temporal learned step-size quantization to further enhance performance. Extensive experimental results demonstrate that our method significantly outperforms previous PTQ-based diffusion models while maintaining similar time and data efficiency. Specifically, there is only a 0.05 sFID increase when quantizing both weights and activations of LDM-4 to 4-bit on ImageNet 256x256. Compared to QAT-based methods, EfficientDM also achieves 16.2x faster quantization with comparable generation quality. Code is available at https://github.com/ThisisBillhe/EfficientDM.
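The two key ingredients named in the abstract can be illustrated with a minimal NumPy sketch: a low-rank (LoRA-style) update is folded into the base weight, and the merged weight is then fake-quantized with a learnable step size, in the spirit of learned step-size quantization. All names, shapes, and the specific quantizer here are illustrative assumptions, not the authors' implementation; in actual quantization-aware training the rounding would be paired with a straight-through estimator so gradients flow to the step size and adapter weights, and EfficientDM learns a separate step size per denoising timestep.

```python
import numpy as np

def fake_quantize(w, step, n_bits=4):
    """Uniform symmetric fake quantization with step size `step`.

    Values are rounded to the nearest multiple of `step`, clipped to the
    signed n-bit integer range, then rescaled back to floating point.
    """
    qmax = 2 ** (n_bits - 1) - 1
    qmin = -qmax - 1
    q = np.clip(np.round(w / step), qmin, qmax)
    return q * step

def qalora_forward(x, W, A, B, step, n_bits=4):
    """Hypothetical QALoRA-style forward pass.

    The rank-r update B @ A is merged into the frozen base weight W, and
    the merged weight is jointly fake-quantized before the matmul, so the
    adapter and the base weights share one low-bit representation.
    """
    W_merged = W + B @ A                      # fold LoRA update into W
    W_q = fake_quantize(W_merged, step, n_bits)
    return x @ W_q.T

# Example: 0.9 saturates the 4-bit grid (max level 7 * 0.1 = 0.7).
print(fake_quantize(np.array([0.12, -0.51, 0.9]), step=0.1, n_bits=4))
```

The point of merging before quantizing is that at inference time there is no separate full-precision adapter branch: the deployed model is a single low-bit weight tensor, unlike standard LoRA-on-top-of-quantized-weights setups.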

Yefei He, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang• 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Class-conditional Image Generation | ImageNet 256x256 (val) | FID | 9.54 | 293
Class-conditional Image Generation | ImageNet 256x256 (test) | FID | 8.54 | 167
Image Super-resolution | DRealSR | MANIQA | 0.5027 | 78
Image Generation | LSUN Bedroom 256x256 (test) | FID | 12.95 | 73
Image Super-resolution | DIV2K (val) | LPIPS | 0.5953 | 59
Real-world Image Super-Resolution | RealLR200 | MUSIQ | 57.35 | 26
Real-world Image Super-Resolution | RealLQ250 | MUSIQ | 0.5803 | 26
Real-world Image Super-Resolution | DRealSR | LPIPS | 0.7091 | 23
Real-world Image Super-Resolution | RealSR | LPIPS | 0.7126 | 23
Conditional Image Generation | ImageNet 256x256 | FID | 6.63 | 22

(Showing 10 of 12 rows.)
