
Membership Inference Attacks against Machine Learning Models

About

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.

Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov · 2016

Related benchmarks

Task                            Dataset                              Metric     Result   Rank
Membership Inference Attack     CIFAR-10                             AUC        0.6724   48
Membership Inference            CLIP image-text (train)              Precision  79.37    36
Membership Inference Attack     CIFAR-100 balanced (evaluation set)  AUROC      76.58    36
GRN Attack                      Adult Income                         MSE        0.231    16
GRN Attack                      CIFAR10                              MSE        0.121    16
GRN Attack                      Drive Diagnosis                      MSE        0.144    16
Membership Inference Attack     Book-Crossing                        Accuracy   53.86    15
Membership Inference Attack     MovieLens                            Accuracy   60       15
Feature Inference Attack (GRN)  CIFAR100                             MSE        0.013    8
Feature Inference Attack (GRN)  MNIST                                MSE        0.104    8

(10 of 18 rows shown)
