Perception Prioritized Training of Diffusion Models
About
Diffusion models learn to restore data corrupted with varying levels of noise by optimizing a weighted sum of the corresponding denoising score-matching loss terms. In this paper, we show that restoring data corrupted at certain noise levels offers a suitable pretext task for the model to learn rich visual concepts. We propose to prioritize these noise levels over others during training by redesigning the weighting scheme of the objective function. We show that this simple redesign of the weighting scheme significantly improves the performance of diffusion models regardless of dataset, architecture, and sampling strategy.
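The summary above does not spell out the weighting itself; in the paper it takes the form λ'_t = λ_t / (k + SNR(t))^γ, which suppresses the loss at low-noise (high-SNR) steps so that training focuses on the noise levels where coarse visual content is learned. Below is a minimal PyTorch sketch of that idea, not the authors' released code: the linear beta schedule, the ε-prediction network `eps_model` and its call signature, and the values k = 1, γ = 1 are all illustrative assumptions.

```python
# Minimal sketch of P2-style loss weighting for an epsilon-prediction
# diffusion model. Schedule, model interface, and hyperparameters are
# placeholders, not the paper's released implementation.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # assumed linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

def p2_loss(eps_model, x0, k=1.0, gamma=1.0):
    """Weighted denoising loss: lambda'_t = lambda_t / (k + SNR(t))^gamma."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)           # random timestep per sample
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)  # broadcast over image dims

    # Forward diffusion: corrupt x0 with noise at level t.
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # P2 weight: down-weights high-SNR (low-noise) steps, keeping
    # emphasis on the noise levels where coarse content is learned.
    snr = a_bar / (1.0 - a_bar)
    weight = 1.0 / (k + snr) ** gamma

    mse = (eps_model(x_t, t) - noise) ** 2  # standard eps-prediction objective
    return (weight * mse).mean()
```

Note that γ = 0 makes the weight identically 1, recovering the standard unweighted ε-prediction objective, so the baseline is a special case of this sketch.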
Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, Sungroh Yoon • 2022
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Unconditional Image Generation | CIFAR-10 (test) | FID | 5.63 | 216 |
| Unconditional Image Generation | CelebA 64x64 | FID | 7.22 | 95 |
| Text-to-Image Generation | MS-COCO | FID | 13.23 | 75 |
| Unconditional Image Generation | FFHQ 256x256 | FID | 6.97 | 64 |
| Image Generation | FFHQ | FID | 6.92 | 52 |
| Text-to-Image Generation | PartiPrompts | CLIP Score | 29.5 | 26 |
| Unconditional Image Generation | FFHQ 256x256 (test) | FID | 7 | 25 |
| Image Generation | CelebA-HQ | FID | 6.91 | 23 |
| Unconditional Image Generation | LSUN Church (test) | FID | 10.77 | 17 |
| Unconditional Image Generation | LSUN Bedroom (test) | FID | 6.53 | 14 |