
Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models

About

Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed, inappropriate image prompts (I2P), containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment.
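At its core, SLD steers the diffusion denoising step away from an "unsafe concept" text embedding, analogously to classifier-free guidance. The sketch below is a simplified, illustrative version of that idea in NumPy, not the paper's exact formulation: the function name `sld_guidance`, the scale parameters, and the simple sign-agreement mask (standing in for SLD's element-wise scaling term) are all assumptions made for clarity.

```python
import numpy as np

def sld_guidance(eps_uncond, eps_text, eps_safety,
                 guidance_scale=7.5, safety_scale=1.0):
    """Combine noise estimates with a safety term (simplified SLD sketch).

    eps_uncond: noise estimate for the empty/unconditioned prompt
    eps_text:   noise estimate for the user's text prompt
    eps_safety: noise estimate for the unsafe-concept prompt
    """
    # Standard classifier-free-guidance direction toward the prompt.
    text_dir = eps_text - eps_uncond
    # Direction toward the unsafe concept.
    safety_dir = eps_safety - eps_uncond
    # Illustrative mask: only steer away where the unsafe-concept direction
    # agrees element-wise with the prompt direction (a stand-in for the
    # paper's scaled safety-guidance term).
    mask = (text_dir * safety_dir > 0.0).astype(eps_uncond.dtype)
    # Subtract the masked safety direction from the prompt guidance.
    return eps_uncond + guidance_scale * (text_dir - safety_scale * mask * safety_dir)
```

With `eps_safety == eps_uncond` (no unsafe content detected) the mask vanishes and the update reduces to plain classifier-free guidance, which matches SLD's training-free, image-quality-preserving design.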

Patrick Schramowski, Manuel Brack, Björn Deiseroth, Kristian Kersting • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | COCO | FID | 52.11 | 51 |
| Text-to-Image Generation | MSCOCO 30K | FID | 17.95 | 42 |
| Concept Unlearning | UnlearnDiffAtk | UnlearnDiffAtk | 0.479 | 36 |
| Text-to-Image Generation | COCO 30k | FID | 16.9 | 29 |
| Explicit Content Removal | I2P | Armpits Count | 47 | 28 |
| Safe Text-to-Image Generation | CoPro V2 (test) | IP | 27 | 23 |
| Safe Text-to-Image Generation | Unsafe Diffusion (UD) | IP Score | 30 | 23 |
| Safe Text-to-Image Generation | COCO 3K | FID | 36.29 | 23 |
| Safe Text-to-Image Generation | I2P | Inappropriate Probability | 19 | 23 |
| Image Generation | MS-COCO 30k (val) | FID | 16.34 | 22 |

Showing 10 of 62 rows

Other info

Code
