
Learning Differentially Private Recurrent Language Models

About

We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees at only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that, given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
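The core mechanism described above can be sketched as a single round of federated averaging with per-user clipping and Gaussian noise. This is an illustrative simplification, not the paper's exact implementation: the function name, the fixed clipping norm, and the way the noise scale is derived from the number of users are all assumptions for the sake of a minimal example.

```python
import numpy as np

def dp_fedavg_round(global_model, user_updates, clip_norm=1.0, noise_multiplier=1.0):
    """One round of federated averaging with user-level DP (sketch).

    Each element of user_updates is a model delta (np.ndarray) computed
    locally by one user. All names and constants here are illustrative.
    """
    clipped = []
    for delta in user_updates:
        norm = np.linalg.norm(delta)
        # Clip each user's update to L2 norm at most clip_norm, bounding
        # any single user's influence on the averaged update.
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Add Gaussian noise calibrated to the per-user sensitivity of the
    # average (clip_norm / number of users) times a noise multiplier.
    sigma = noise_multiplier * clip_norm / len(user_updates)
    noisy_avg = avg + np.random.normal(0.0, sigma, size=avg.shape)
    return global_model + noisy_avg
```

Because each user's contribution is bounded before averaging, the added noise masks the presence or absence of any single user's data, which is what makes the guarantee user-level rather than example-level.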

H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang • 2017

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 (test) | Accuracy | 66.58 | 3381
Image Classification | EMNIST (test) | Accuracy | 78.06 | 174
Image Classification | CIFAR-100 non-IID (test) | Test Accuracy (Avg Best) | 20.75 | 62
Image Classification | Non-IID MNIST alpha=0.5 (test) | Accuracy | 77.96 | 12
Image Classification | CIFAR-10 non-IID (α=0.1) (test) | Accuracy | 30.09 | 12
Face Verification | DigiFace 10K (test) | Recall@FAR=1e-3 | 72.37 | 4
Face Verification | DigiFace 10K (val) | Recall@FAR=1e-3 | 72.57 | 4
Image Verification | DigiFace | Recall@FAR=1e-3 (AllPair) | 13.38 | 2
Image Verification | EMNIST classes 36-62 (test) | Recall@FAR=1e-3 (Approx) | 9.78 | 2
Image Verification | GLD | Recall@FAR=1e-3 (Approx) | 24.48 | 2
Showing 10 of 11 rows
