# Structural Pruning for Diffusion Models

## About
Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative impact of Diffusion Probabilistic Models (DPMs). The impressive capability of these models, however, often entails significant computational overhead during both training and inference. To tackle this challenge, we present Diff-Pruning, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones, without the need for extensive re-training. The essence of Diff-Pruning is a Taylor expansion over pruned timesteps: it disregards non-contributory diffusion steps and ensembles informative gradients to identify important weights. Our empirical assessment, undertaken across several datasets, highlights two primary benefits of the proposed method: 1) Efficiency: it enables approximately a 50% reduction in FLOPs at a mere 10% to 20% of the original training expenditure; 2) Consistency: the pruned diffusion models inherently preserve generative behavior congruent with their pre-trained counterparts. Code is available at https://github.com/VainF/Diff-Pruning.
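The core idea, first-order Taylor importance accumulated only over informative timesteps, can be sketched as follows. This is a minimal illustration, not the repository's implementation: the loss function, the toy model, and the per-channel reduction over Conv2d weights are assumptions for clarity; Diff-Pruning's actual criterion operates on the diffusion training loss restricted to the retained (non-pruned) timesteps.

```python
import torch
import torch.nn as nn

def taylor_channel_importance(model, loss_fn, batches):
    # Accumulate first-order Taylor scores |w * dL/dw|, reduced per output
    # channel of every Conv2d weight. In Diff-Pruning, loss_fn would be the
    # denoising objective evaluated only at informative diffusion timesteps.
    scores = {
        name: torch.zeros(p.shape[0])
        for name, p in model.named_parameters() if p.dim() == 4
    }
    for batch in batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for name, p in model.named_parameters():
            if p.dim() == 4 and p.grad is not None:
                scores[name] += (p.detach() * p.grad).abs().sum(dim=(1, 2, 3))
    return scores

# Toy usage: a small conv net standing in for the diffusion U-Net.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1)
)
mse = lambda m, x: ((m(x) - x) ** 2).mean()  # stand-in reconstruction loss
batches = [torch.randn(2, 3, 16, 16) for _ in range(2)]
scores = taylor_channel_importance(model, mse, batches)
# The lowest-scoring channels are the candidates for structural pruning.
```

Structural pruning then removes whole low-score channels (rather than individual weights), which is what yields real FLOPs reductions without specialized sparse kernels.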
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 201.8 | 815 |
| Class-conditional Image Generation | ImageNet 256x256 (train) | IS | 214.4 | 345 |
| Image Generation | ImageNet (val) | Inception Score | 156.4 | 247 |
| Class-conditional Image Generation | ImageNet 256x256 (test) | FID | 9.27 | 208 |
| Image Generation | CIFAR10 32x32 (test) | FID | 1.97 | 183 |
| Image Generation | CIFAR-10 32x32 | FID | 3.73 | 147 |
| Class-conditional Image Generation | ImageNet 64x64 (test) | FID | 2.57 | 91 |
| Image Generation | FFHQ 64x64 (test) | FID | 2.39 | 82 |
| Image Generation | CelebA-64 | FID | 2.87 | 75 |
| Unconditional Image Generation | LSUN Bedroom 256x256 | FID | 18.6 | 68 |