
Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step

About

Score identity Distillation (SiD) is a data-free method that has achieved SOTA performance in image generation by leveraging only a pretrained diffusion model, without requiring any training data. However, its ultimate performance is constrained by how accurately the pretrained model captures the true data scores at different stages of the diffusion process. In this paper, we introduce SiDA (SiD with Adversarial Loss), which not only enhances generation quality but also improves distillation efficiency by incorporating real images and an adversarial loss. SiDA utilizes the encoder from the generator's score network as a discriminator, allowing it to distinguish between real images and those generated by SiD. The adversarial loss is batch-normalized within each GPU and then combined with the original SiD loss. This integration effectively incorporates the average "fakeness" per GPU batch into the pixel-based SiD loss, enabling SiDA to distill a single-step generator. SiDA converges significantly faster than its predecessor when distilled from scratch, and swiftly improves upon the original model's performance during fine-tuning from a pre-distilled SiD generator. This one-step adversarial distillation method establishes new benchmarks in generation performance when distilling EDM diffusion models, achieving an FID score of 1.110 on ImageNet 64x64. When distilling EDM2 models trained on ImageNet 512x512, our SiDA method surpasses even the largest teacher model, EDM2-XXL, which achieved an FID of 1.81 using classifier-free guidance (CFG) and 63 generation steps. In contrast, SiDA achieves FID scores of 2.156 for size XS, 1.669 for S, 1.488 for M, 1.413 for L, 1.379 for XL, and 1.366 for XXL, all without CFG and in a single generation step. These results highlight substantial improvements across all model sizes. Our code is available at https://github.com/mingyuanzhou/SiD/tree/sida.
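The abstract describes combining a per-GPU batch-normalized adversarial loss with the pixel-based SiD loss. The following is a minimal, hypothetical sketch of that combination in PyTorch; the function name `sida_generator_loss`, the weighting `lam`, and the choice of a non-saturating adversarial term are assumptions for illustration, not the authors' actual implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def sida_generator_loss(sid_loss, fake_logits, lam=1.0, eps=1e-8):
    """Hypothetical sketch: fold a batch-normalized adversarial term
    into a pixel-based distillation (SiD) loss.

    sid_loss    -- scalar tensor, the original SiD distillation loss
    fake_logits -- discriminator logits on generated images, shape [B]
    lam         -- assumed weighting between the two terms
    """
    # Non-saturating generator loss: encourage fake logits to rise.
    adv = F.softplus(-fake_logits)            # per-sample, shape [B]
    # Normalize within this GPU's batch so the adversarial term's scale
    # is comparable to the pixel-based SiD loss (a simplification of the
    # per-GPU batch normalization the abstract describes).
    adv = adv / (adv.detach().mean() + eps)
    # Incorporate the batch-average "fakeness" into the objective.
    return sid_loss + lam * adv.mean()
```

Because the adversarial term is rescaled by its own (detached) batch mean, its contribution stays at a stable magnitude regardless of how confident the discriminator becomes, which is one plausible reading of the per-GPU normalization described above.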

Mingyuan Zhou, Huangjie Zheng, Yi Gu, Zhendong Wang, Hai Huang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Generation | ImageNet 512x512 (val) | FID-50K | 1.366 | 184 |
| Unconditional Image Generation | CIFAR-10 unconditional | FID | 1.499 | 159 |
| Image Generation | ImageNet 64x64 (train val) | FID | 1.11 | 83 |
| Conditional Image Generation | CIFAR-10 | FID | 1.396 | 71 |
| Image Generation | ImageNet 512x512 (test) | FID | 1.366 | 57 |
| Class-conditional Image Generation | ImageNet 512x512 (val test) | FID | 1.37 | 40 |
| Class-conditional Image Generation | ImageNet 64x64 (train test) | FID | 1.1 | 30 |
| Conditional Image Generation | CIFAR-10 class-conditional | FID | 1.4 | 29 |
| Image Generation | AFHQ 64 v2 2020 (test) | FID | 1.276 | 10 |
| Image Generation | FFHQ 64x64 (train test) | FID | 1.04 | 9 |
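All benchmark results above are reported as Fréchet Inception Distance (FID), which measures the distance between Gaussians fitted to Inception features of real and generated images (lower is better). A minimal sketch of the closed-form FID computation, assuming the feature means and covariances have already been extracted:

```python
import numpy as np
from scipy import linalg

def fid(mu1, cov1, mu2, cov2):
    """Fréchet Inception Distance between two Gaussians
    N(mu1, cov1) and N(mu2, cov2) fitted to Inception features:
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 * sqrt(cov1 @ cov2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(cov1 @ cov2)
    # sqrtm can return a complex matrix with tiny imaginary parts
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

Identical distributions yield an FID of 0; in practice the statistics are computed over 50,000 generated samples (hence the "FID-50K" label in the table).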
