
Differentially Private Diffusion Models

About

While modern machine learning models rely on increasingly large training datasets, data is often limited in privacy-sensitive domains. Generative models trained with differential privacy (DP) on sensitive data can sidestep this challenge, providing access to synthetic data instead. We build on the recent success of diffusion models (DMs) and introduce Differentially Private Diffusion Models (DPDMs), which enforce privacy using differentially private stochastic gradient descent (DP-SGD). We investigate the DM parameterization and the sampling algorithm, which turn out to be crucial ingredients in DPDMs, and propose noise multiplicity, a powerful modification of DP-SGD tailored to the training of DMs. We validate our novel DPDMs on image generation benchmarks and achieve state-of-the-art performance in all experiments. Moreover, on standard benchmarks, classifiers trained on DPDM-generated synthetic data perform on par with task-specific DP-SGD-trained classifiers, which has not been demonstrated before for DP generative models. Project page and code: https://nv-tlabs.github.io/DPDM.
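To make the noise-multiplicity idea concrete, here is a minimal toy sketch of one DP-SGD step in which each example's gradient is averaged over K independent diffusion-noise draws before per-example clipping. This is not the authors' implementation: the scalar "denoiser", the noise-level range, and all function names are illustrative assumptions; a real DPDM would use a neural denoiser and a DP accountant.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsm_loss_grad(theta, x, sigma, eps):
    # Toy denoising-score-matching objective with a scalar "denoiser"
    # D(x_noisy) = theta * x_noisy; returns d/dtheta of (D(x_noisy) - x)^2.
    x_noisy = x + sigma * eps
    pred = theta * x_noisy
    return 2.0 * (pred - x) * x_noisy

def dp_sgd_step(theta, batch, K=4, clip=1.0, noise_mult=1.0, lr=0.1):
    """One DP-SGD step with noise multiplicity: each example's gradient
    is averaged over K (sigma, eps) draws BEFORE per-example clipping,
    which reduces gradient variance at no extra privacy cost, since the
    clipped quantity still depends on a single example."""
    per_example = []
    for x in batch:
        draws = []
        for _ in range(K):
            sigma = rng.uniform(0.1, 1.0)      # diffusion noise level (assumed range)
            eps = rng.standard_normal()        # Gaussian perturbation
            draws.append(dsm_loss_grad(theta, x, sigma, eps))
        g = np.mean(draws)                     # noise-multiplicity average
        g = g / max(1.0, abs(g) / clip)        # per-example norm clipping
        per_example.append(g)
    g_sum = np.sum(per_example)
    g_sum += noise_mult * clip * rng.standard_normal()  # Gaussian mechanism
    return theta - lr * g_sum / len(batch)
```

Because the K draws are averaged inside the per-example clipping boundary, the sensitivity of the summed gradient is unchanged, so the same Gaussian noise scale yields the same privacy guarantee as K = 1.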

Tim Dockhorn, Tianshi Cao, Arash Vahdat, Karsten Kreis • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Generation | CelebA 64x64 (test) | FID | 106.7 | 203 |
| Image Generation | CelebA 32x32 (test) | FID | 28.8 | 17 |
| Differentially Private Image Synthesis | MNIST | FID | 4.4 | 16 |
| Differentially Private Image Synthesis | F-MNIST | FID | 17.1 | 16 |
| Differentially Private Image Synthesis | CIFAR-10 | FID | 110.1 | 16 |
| Differentially Private Image Synthesis | CelebA | FID | 28.8 | 16 |
| Differentially Private Image Synthesis | CAMELYON | FID | 29.2 | 16 |
| Image Generation | CelebA 128x128 (test) | FID | 210.8 | 14 |
