
Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers

About

Recent advancements in diffusion models, particularly the architectural shift from UNet-based models to Diffusion Transformers (DiTs), have significantly improved the quality and scalability of image and video generation. However, despite their impressive capabilities, the substantial computational cost of these large-scale models poses significant challenges for real-world deployment. Post-Training Quantization (PTQ) has emerged as a promising solution, enabling model compression and accelerated inference for pretrained models without costly retraining. However, research on DiT quantization remains sparse, and existing PTQ frameworks, designed primarily for traditional diffusion models, tend to suffer from biased quantization and notable performance degradation. In this work, we identify that DiTs typically exhibit significant spatial variance in both weights and activations, along with temporal variance in activations. To address these issues, we propose Q-DiT, a novel approach that seamlessly integrates two key techniques: automatic quantization granularity allocation, which handles the large variance of weights and activations across input channels, and sample-wise dynamic activation quantization, which adaptively captures activation changes across both timesteps and samples. Extensive experiments on ImageNet and VBench demonstrate the effectiveness of the proposed Q-DiT. Specifically, when quantizing DiT-XL/2 to W6A8 on ImageNet (256×256), Q-DiT reduces FID by a remarkable 1.09 compared to the baseline. Under the more challenging W4A8 setting, it maintains high fidelity in image and video generation, establishing a new benchmark for efficient, high-quality quantization of DiTs. Code is available at https://github.com/Juanerx/Q-DiT.
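The two techniques named above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the fixed `group_size`, and the simple min/max scaling are illustrative assumptions; Q-DiT itself searches over granularity allocations automatically. The sketch shows (a) group-wise weight quantization along the input-channel axis, where a smaller group size gives finer granularity to absorb per-channel variance, and (b) dynamic activation quantization, where scales are recomputed from each sample at inference time so they track changes across timesteps and samples.

```python
import numpy as np

def quantize_weights_groupwise(w, n_bits=4, group_size=64):
    """Asymmetric uniform quantization of a (out_ch, in_ch) weight matrix
    with per-group scales along the input-channel axis. A smaller
    group_size means finer granularity, which helps when channel
    magnitudes vary widely, as observed for DiT weights."""
    out_ch, in_ch = w.shape
    assert in_ch % group_size == 0, "in_ch must be divisible by group_size"
    qmax = 2 ** n_bits - 1
    wg = w.reshape(out_ch, in_ch // group_size, group_size)
    wmin = wg.min(axis=-1, keepdims=True)
    wmax = wg.max(axis=-1, keepdims=True)
    scale = np.where(wmax > wmin, (wmax - wmin) / qmax, 1.0)
    q = np.clip(np.round((wg - wmin) / scale), 0, qmax)
    # Return the dequantized weights to inspect reconstruction error.
    return (q * scale + wmin).reshape(out_ch, in_ch)

def quantize_activation_dynamic(x, n_bits=8):
    """Sample-wise dynamic activation quantization: min/max (and hence the
    scale) are recomputed per sample at inference time, so the quantizer
    adapts to activation shifts across timesteps and samples."""
    qmax = 2 ** n_bits - 1
    xmin = x.min(axis=-1, keepdims=True)
    xmax = x.max(axis=-1, keepdims=True)
    scale = np.where(xmax > xmin, (xmax - xmin) / qmax, 1.0)
    q = np.clip(np.round((x - xmin) / scale), 0, qmax)
    return q * scale + xmin
```

As a quick check of the granularity intuition, comparing the mean reconstruction error of `quantize_weights_groupwise` at `group_size=128` versus `group_size=32` on random weights typically shows the finer grouping achieving lower error, at the cost of storing more scale/offset parameters.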

Lei Chen, Yuan Meng, Chen Tang, Xinzhu Ma, Jingyan Jiang, Xin Wang, Zhi Wang, Wenwu Zhu • 2024

Related benchmarks

Task                                 | Dataset                 | Metric        | Result | Rank
Text-to-Image Generation             | GenEval                 | Overall Score | 37.34  | 467
Image Generation                     | ImageNet 256x256 (val)  | FID           | 5.32   | 307
Image Generation                     | ImageNet 512x512 (val)  | FID-50K       | 6.24   | 184
Text-to-Video Generation             | VBench                  | --            | --     | 111
Image Super-resolution               | DRealSR                 | MANIQA        | 0.4228 | 78
Class-conditional Image Generation   | ImageNet (val)          | FID           | 5.44   | 54
Real-world Image Super-Resolution    | RealLQ250               | MUSIQ         | 0.594  | 26
Real-world Image Super-Resolution    | RealLR200               | MUSIQ         | 58.16  | 26
Real-world Image Super-Resolution    | DRealSR                 | LPIPS         | 0.6748 | 23
Real-world Image Super-Resolution    | RealSR                  | LPIPS         | 0.6806 | 23
(Showing 10 of 11 rows.)

Other info

Code: https://github.com/Juanerx/Q-DiT
