LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models

About

In the era of AIGC, demand has emerged for low-budget and even on-device applications of diffusion models. For compressing Stable Diffusion models (SDMs), several approaches have been proposed, most of which rely on handcrafted layer removal to obtain smaller U-Nets, combined with knowledge distillation to recover performance. However, handcrafted layer removal is inefficient and lacks scalability and generalization, and the feature distillation used in the retraining phase suffers from an imbalance issue: a few numerically large feature loss terms dominate the others throughout retraining. To this end, we propose LAPTOP-Diff, layer pruning and normalized distillation for compressing diffusion models. We 1) introduce a layer pruning method that compresses the SDM U-Net automatically, with an effective one-shot pruning criterion whose one-shot performance is guaranteed by its good additivity property and which surpasses other layer pruning and handcrafted layer removal methods, and 2) propose normalized feature distillation for retraining, which alleviates the imbalance issue. Using LAPTOP-Diff, we compress the U-Nets of SDXL and SDM-v1.5 to state-of-the-art performance, achieving a minimal PickScore decline of 4.0% at a pruning ratio of 50%, whereas the best comparative method's minimal decline is 8.2%.
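To make the additivity argument concrete, the sketch below shows how an additive one-shot criterion can be used to select layers: each layer's removal cost is measured once in isolation, and because the criterion is (approximately) additive, the degradation of any layer subset is estimated as the sum of the individual costs. The function name, inputs, and greedy cost-per-parameter selection here are illustrative assumptions, not the paper's exact formulation.

```python
def one_shot_layer_selection(per_layer_cost, layer_params, params_to_remove):
    """Illustrative additive one-shot selection (assumed inputs, not the
    paper's exact formulation).

    per_layer_cost:   {layer_name: degradation measured when removing only
                       that layer, e.g. output distortion on a calibration set}
    layer_params:     {layer_name: parameter count of the layer}
    params_to_remove: parameter budget the pruned subset must reach.
    """
    # Additivity lets each layer's contribution to the total degradation be
    # treated independently, so a single greedy pass by cost-per-parameter
    # gives a one-shot selection without re-evaluating layer combinations.
    ranked = sorted(per_layer_cost, key=lambda l: per_layer_cost[l] / layer_params[l])
    selected, removed = [], 0
    for layer in ranked:
        if removed >= params_to_remove:
            break
        selected.append(layer)
        removed += layer_params[layer]
    estimated_cost = sum(per_layer_cost[l] for l in selected)  # additive estimate
    return selected, estimated_cost


# Toy usage: pick layers covering roughly half of a 520-parameter model.
costs  = {"down.1": 0.8, "mid": 2.5, "up.0": 0.3, "up.1": 0.6}
params = {"down.1": 120, "mid": 200, "up.0": 110, "up.1": 90}
print(one_shot_layer_selection(costs, params, params_to_remove=260))
```

A full implementation would more likely frame the selection as a small constrained optimization over the parameter budget; the greedy pass above is only meant to show why additivity makes one-shot selection tractable.

Normalized feature distillation targets the imbalance issue in which a few numerically large feature loss terms dominate retraining. One plausible form, sketched below in PyTorch, divides each layer's feature loss by its own detached magnitude so every layer contributes on a comparable scale; the exact normalization used in LAPTOP-Diff may differ.

```python
import torch
import torch.nn.functional as F

def normalized_feature_distillation_loss(student_feats, teacher_feats, eps=1e-6):
    """Sketch of a normalized feature distillation objective (assumed form):
    each layer's feature-matching loss is rescaled by its own detached
    magnitude so no single numerically large term dominates retraining."""
    total = torch.zeros(())
    for f_s, f_t in zip(student_feats, teacher_feats):
        layer_loss = F.mse_loss(f_s, f_t)                          # raw per-layer feature loss
        total = total + layer_loss / (layer_loss.detach() + eps)   # scale-balanced term
    return total


# Toy usage with random feature maps standing in for U-Net activations.
student = [torch.randn(2, 8, 16, 16), torch.randn(2, 16, 8, 8)]
teacher = [torch.randn(2, 8, 16, 16), torch.randn(2, 16, 8, 8)]
print(normalized_feature_distillation_loss(student, teacher))
```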

Dingkun Zhang, Sijia Li, Chen Chen, Qingsong Xie, Haonan Lu • 2024

Related benchmarks

Task                               | Dataset                     | Result     | Rank
Image Generation                   | LSUN church                 | FID 24.73  | 117
Class-conditioned Image Generation | ImageNet-1k 1.0 (test val)  | FID 41.52  | 100
Image Generation                   | CelebA                      | FID 10.52  | 65
Image Generation                   | MRI                         | FDD 0.051  | 22
Image Generation                   | Hubble                      | FDD 0.141  | 22
Image Generation                   | Bedroom                     | FID 22.85  | 22
Image Generation                   | Pokemon                     | FDD 0.466  | 22
