
Differentially Private Generative Adversarial Network

About

Generative Adversarial Networks (GANs) and their variants have recently attracted intensive research interest due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction for studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution can concentrate on the training data points, meaning they can easily memorize training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data, such as patient medical records, where the concentration of the distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, which achieves differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide a rigorous proof of the privacy guarantee, as well as comprehensive empirical evidence supporting our analysis, demonstrating that our method can generate high-quality data points at a reasonable privacy level.
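The core mechanism the abstract describes, adding carefully designed noise to gradients during training, follows the DP-SGD pattern: clip each example's gradient to a fixed L2 norm, average, then add Gaussian noise. Below is a minimal NumPy sketch of that sanitization step; the function name and the `clip_norm`/`noise_mult` hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sanitize_gradients(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Sketch of DP-SGD-style gradient sanitization for a discriminator update.

    Each per-example gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_mult * clip_norm / batch_size is added to the average.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (sensitivity of the mean).
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```

In the DPGAN setting only the discriminator sees real (private) data, so sanitizing its gradient updates suffices; the generator inherits the privacy guarantee through post-processing.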

Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, Jiayu Zhou • 2018

Related benchmarks

| Task                                    | Dataset               | Metric           | Value | Rank |
|-----------------------------------------|-----------------------|------------------|-------|------|
| Image Generation                        | CelebA 64x64 (test)   | FID              | 395   | 203  |
| Image Generation                        | CelebA 32x32 (test)   | FID              | 31.7  | 17   |
| Differentially Private Image Synthesis  | CelebA                | FID              | 31.7  | 16   |
| Differentially Private Image Synthesis  | CIFAR-10              | FID              | 138.7 | 16   |
| Differentially Private Image Synthesis  | MNIST                 | FID              | 30.3  | 16   |
| Differentially Private Image Synthesis  | CAMELYON              | FID              | 66.9  | 16   |
| Differentially Private Image Synthesis  | F-MNIST               | FID              | 74.8  | 16   |
| Image Classification                    | MNIST (test)          | Accuracy (ε=10)  | 80.11 | 14   |
| Image Generation                        | CelebA 128x128 (test) | FID              | 320.2 | 14   |
| Image Classification                    | CelebA-G (test)       | Accuracy (ε=10)  | 52.11 | 12   |

Showing 10 of 11 rows.
