
Improved Membership Inference Attacks Against Language Classification Models

About

Artificial intelligence systems are prevalent in everyday life, with use cases in retail, manufacturing, health, and many other fields. With the rise in AI adoption, associated risks have been identified, including privacy risks to the people whose data was used to train models. Assessing the privacy risks of machine learning models is crucial to enabling knowledgeable decisions on whether to use, deploy, or share a model. A common approach to privacy risk assessment is to run one or more known attacks against the model and measure their success rate. We present a novel framework for running membership inference attacks against classification models. Our framework takes advantage of the ensemble method, generating many specialized attack models for different subsets of the data. We show that this approach achieves higher accuracy than either a single attack model or an attack model per class label, both on classical and language classification tasks.

Shlomit Shachor, Natalia Razinkov, Abigail Goldsteen • 2023
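The ensemble approach the abstract describes can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it clusters attack features (sorted prediction probabilities) and trains one specialized attack model per cluster, falling back to a single global attack model for degenerate clusters. For brevity it uses known membership labels directly, whereas a realistic attack would derive them from shadow models; the cluster count `k` and all model choices are assumptions.

```python
# Hedged sketch of an ensemble membership inference attack:
# one specialized attack model per subset (cluster) of the data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic target task: the target model sees only the "member" half.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

def attack_features(model, X):
    # Sorted class probabilities are a common membership-inference feature.
    return np.sort(model.predict_proba(X), axis=1)

# Attack training set: members labeled 1, non-members labeled 0.
# (In practice these labels would come from shadow models.)
F_train = np.vstack([attack_features(target, X_mem[:800]),
                     attack_features(target, X_non[:800])])
m_train = np.concatenate([np.ones(800), np.zeros(800)])

# Ensemble: cluster the attack-feature space (k is an assumed hyperparameter)
# and fit one specialized attack model per cluster.
k = 5
clusterer = KMeans(n_clusters=k, n_init=10, random_state=0).fit(F_train)
global_attack = LogisticRegression(max_iter=1000).fit(F_train, m_train)
specialists = {}
for c in range(k):
    idx = clusterer.labels_ == c
    if len(np.unique(m_train[idx])) == 2:  # cluster has both classes
        specialists[c] = LogisticRegression(max_iter=1000).fit(F_train[idx], m_train[idx])
    else:
        specialists[c] = global_attack  # degenerate cluster: fall back

def infer_membership(model, X):
    # Route each sample to its cluster's specialized attack model.
    f = attack_features(model, X)
    clusters = clusterer.predict(f)
    return np.array([specialists[c].predict(f[i:i + 1])[0]
                     for i, c in enumerate(clusters)])

# Evaluate on held-out members and non-members (not used for attack training).
X_eval = np.vstack([X_mem[800:], X_non[800:]])
truth = np.concatenate([np.ones(200), np.zeros(200)])
preds = infer_membership(target, X_eval)
acc = (preds == truth).mean()
print(f"attack accuracy on 400 held-out points: {acc:.2f}")
```

Routing by learned clusters rather than by class label is the essence of the ensemble idea: each attack model only has to fit the member/non-member boundary within its own region of the feature space.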

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Membership Inference Attack | ELD user-level (test) | TPR @ 0% FPR | 9 | 78 |
| Membership Inference Attack | TUH-EEG | ROC AUC | 0.555 | 78 |
| Membership Inference Attack | ELD | ROC AUC | 0.509 | 78 |
| Membership Inference Attack | ELD record-level (test) | TPR @ 0.1% FPR | 0.02 | 78 |
| Membership Inference Attack | TUH-EEG record-level (test) | TPR @ 0.1% FPR | 3 | 38 |
| Membership Inference Attack | TUH-EEG user-level (test) | TPR @ 0% FPR | 15 | 11 |
