
OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot

About

Large-scale text-to-image diffusion models, while powerful, suffer from prohibitive computational cost. Existing one-shot network pruning methods can hardly be applied to them directly due to the iterative denoising nature of diffusion models. To bridge this gap, this paper presents OBS-Diff, a novel one-shot pruning framework that enables accurate and training-free compression of large-scale text-to-image diffusion models. Specifically, (i) OBS-Diff revitalizes the classic Optimal Brain Surgeon (OBS), adapting it to the complex architectures of modern diffusion models and supporting diverse pruning granularities, including unstructured, N:M semi-structured, and structured (MHA heads and FFN neurons) sparsity; (ii) to align the pruning criterion with the iterative dynamics of the diffusion process, we examine the problem from an error-accumulation perspective and propose a timestep-aware Hessian construction with a logarithmic-decrease weighting scheme that assigns greater importance to earlier timesteps, mitigating potential error accumulation; (iii) furthermore, a computationally efficient group-wise sequential pruning strategy is proposed to amortize the expensive calibration process. Extensive experiments show that OBS-Diff achieves state-of-the-art one-shot pruning for diffusion models, delivering inference acceleration with minimal degradation in visual quality.
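The two core ingredients described above, a timestep-weighted Hessian built from calibration activations and the classic OBS saliency/update rule, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the exact logarithmic-decrease schedule (`timestep_weights`), the damping term, and all function names here are assumptions for illustration.

```python
import numpy as np

def timestep_weights(T):
    # Hypothetical logarithmic-decrease schedule: earlier denoising
    # steps (small t) receive larger weight. The exact form used by
    # OBS-Diff may differ; this only illustrates the idea.
    w = np.log(T - np.arange(T) + 1.0)
    return w / w.sum()

def weighted_hessian(acts_per_step, damp=1e-2):
    # acts_per_step: list of (n_samples, d) activation matrices,
    # one per denoising timestep. Builds H = sum_t lambda_t X_t^T X_t,
    # so early timesteps dominate the pruning statistics.
    T = len(acts_per_step)
    lam = timestep_weights(T)
    d = acts_per_step[0].shape[1]
    H = np.zeros((d, d))
    for t, X in enumerate(acts_per_step):
        H += lam[t] * (X.T @ X) / X.shape[0]
    # Small diagonal damping keeps H invertible.
    H += damp * np.mean(np.diag(H)) * np.eye(d)
    return H

def obs_prune_one(w, H_inv):
    # Classic OBS step: remove the weight with the smallest saliency
    # w_q^2 / (2 [H^-1]_qq), then compensate the remaining weights
    # with the update  w <- w - (w_q / [H^-1]_qq) * H^-1[:, q].
    diag = np.diag(H_inv)
    q = int(np.argmin(w ** 2 / (2.0 * diag)))
    w = w - (w[q] / diag[q]) * H_inv[:, q]
    w[q] = 0.0
    return w, q
```

Iterating `obs_prune_one` (or its blocked variants) to a target sparsity yields unstructured pruning; grouping columns by attention head or FFN neuron gives the structured granularities mentioned above.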

Junhan Zhu, Hesong Wang, Mingluo Su, Zefang Wang, Huan Wang • 2025

Related benchmarks

| Task                     | Dataset                        | Metric | Result | Rank |
|--------------------------|--------------------------------|--------|--------|------|
| Text-to-Image Generation | MS-COCO 2014 (val)             | FID    | 27.2   | 128  |
| Image Generation         | CIFAR-10 32x32                 | FID    | 7.55   | 44   |
| Text to Image            | MS-COCO 5K prompts 2014 (val)  | FID    | 29.15  | 23   |
| Text-to-Image Generation | SDXL U-Net (test)              | FID    | 29.08  | 10   |
| Text-to-Image Generation | SD 3-medium (2B) (evaluation)  | FID    | 32.96  | 9    |

Other info

GitHub
