Deep Learning with Differential Privacy
About
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
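The core algorithmic technique in the paper is differentially private SGD: each example's gradient is clipped to a fixed L2 norm to bound its influence, and calibrated Gaussian noise is added to the batch sum before the update. A minimal NumPy sketch of one such update step is below; the function name, hyperparameter defaults, and the flat per-example gradient layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One differentially private SGD step (illustrative sketch).

    per_example_grads: array of shape (batch_size, n_params),
    one flattened gradient row per training example.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    batch_size = per_example_grads.shape[0]
    # Clip each example's gradient to L2 norm <= clip_norm,
    # bounding any single example's influence (the sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Sum the clipped gradients and add Gaussian noise whose scale
    # is calibrated to the clipping norm.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    # Average and take an ordinary SGD step.
    return params - lr * noisy_sum / batch_size
```

The privacy guarantee comes from tracking the cumulative cost of these noisy steps; the paper's moments accountant gives a much tighter bound on the total (ε, δ) than composing per-step guarantees naively.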
Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang • 2016
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-10 (test) | Accuracy: 80 | 906 |
| Image Classification | MNIST (test) | Accuracy: 98.3 | 882 |
| Language Modeling | WikiText-2 | Perplexity (PPL): 20.17 | 841 |
| Image Classification | CIFAR-10 (test) | Accuracy: 66.23 | 585 |
| Image Classification | Fashion MNIST (test) | Accuracy: 89.4 | 568 |
| Image Classification | EMNIST (test) | Accuracy: 88.65 | 174 |
| Image Classification | ImageNet-100 (test) | Clean Accuracy: 62.52 | 109 |
| Classification | CelebA (test) | Average Accuracy: 67.8 | 92 |
| Image Classification | MNIST | Clean Accuracy: 96 | 71 |
| Image Classification | MNIST (test) | Accuracy: 94.53 | 61 |
Showing 10 of 42 rows.