
On the Privacy Risks of Model Explanations

About

Privacy and transparency are two key foundations of trustworthy machine learning. Model explanations offer insights into a model's decisions on input data, whereas privacy is primarily concerned with protecting information about the training data. We analyze the connections between model explanations and the leakage of sensitive information about a model's training set. We investigate the privacy risks of feature-based model explanations using membership inference attacks, quantifying how much model predictions together with their explanations leak information about the presence of a datapoint in the model's training set. We extensively evaluate membership inference attacks based on feature-based model explanations across a variety of datasets. We show that backpropagation-based explanations can leak a significant amount of information about individual training datapoints, because they reveal statistical information about the model's decision boundary around an input, which in turn can reveal its membership. We also empirically investigate the trade-off between privacy and explanation quality by studying perturbation-based model explanations.
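The leakage mechanism described above can be illustrated with a minimal sketch: a threshold-based membership inference attack that uses the norm of a gradient-based explanation as the attack signal. This is an illustrative toy, not the authors' exact attack; the explanation-norm distributions below are synthetic stand-ins for the intuition that non-members tend to sit closer to the decision boundary and thus have larger gradient norms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-norms of gradient-based explanations (illustrative only):
# training points (members) with smaller gradient norms, non-members
# with larger ones, mimicking proximity to the decision boundary.
member_norms = rng.normal(loc=0.5, scale=0.2, size=1000)
nonmember_norms = rng.normal(loc=1.0, scale=0.2, size=1000)

def attack(explanation_norm, threshold):
    """Predict 'member' when the explanation norm falls below the threshold."""
    return explanation_norm < threshold

threshold = 0.75  # chosen by inspection here; a real attacker would tune it
tpr = attack(member_norms, threshold).mean()     # fraction of members caught
fpr = attack(nonmember_norms, threshold).mean()  # fraction of non-members misflagged
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```

The larger the gap between the two distributions, the more membership information the explanation leaks; with fully overlapping distributions the attack degrades to random guessing.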

Reza Shokri, Martin Strobel, Yair Zick • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Membership Inference Attack | CIFAR-10 (test) | TPR@0.1%FPR | 30 | 42 |
| Membership Inference Attack | GTSRB (test) | TPR@0.1%FPR | 0.3 | 42 |
| Membership Inference Attack | CIFAR-100 (test) | AUC | 0.843 | 37 |
| Membership Inference Attack | CIFAR-100 (test) | TPR@0.1%FPR | 60 | 21 |
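The two metrics reported in the benchmarks are standard for membership inference: AUC summarizes attack performance over all thresholds, and TPR at a fixed low FPR (here 0.1%) measures how many members an attacker can identify while almost never misflagging a non-member. A small sketch of both, computed from raw attack scores on synthetic data (the score distributions are illustrative assumptions, not benchmark data):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr):
    """TPR when the threshold is set so the non-member FPR equals
    target_fpr (higher score means 'predict member')."""
    thresh = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float((member_scores > thresh).mean())

def auc(member_scores, nonmember_scores):
    """AUC as the probability that a random member outscores a
    random non-member (ties count half)."""
    m = member_scores[:, None]
    n = nonmember_scores[None, :]
    return float((m > n).mean() + 0.5 * (m == n).mean())

# Illustrative attack scores only.
rng = np.random.default_rng(1)
members = rng.normal(loc=1.0, scale=0.3, size=2000)
nonmembers = rng.normal(loc=0.0, scale=0.3, size=2000)

print(f"AUC          = {auc(members, nonmembers):.3f}")
print(f"TPR@0.1%FPR  = {tpr_at_fpr(members, nonmembers, 0.001):.3f}")
```

TPR at very low FPR is the stricter of the two: an attack can have a high AUC yet catch almost no members at FPR = 0.1%, which is why the benchmark reports both.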
