
Blind Baselines Beat Membership Inference Attacks for Foundation Models

About

Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model. For foundation models trained on unknown Web data, MI attacks are often used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning. Unfortunately, we find that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions. For 8 published MI evaluation datasets, we show that blind attacks -- that distinguish the member and non-member distributions without looking at any trained model -- outperform state-of-the-art MI attacks. Existing evaluations thus tell us nothing about membership leakage of a foundation model's training data.
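To illustrate what a "blind" attack looks like in practice, here is a minimal, hypothetical sketch (not necessarily the paper's exact method): a bag-of-words classifier that separates the member and non-member pools using only the raw text, without ever querying the target model. The `members` and `non_members` lists of strings are assumed inputs drawn from an MI evaluation dataset; an AUC well above 0.5 indicates the two pools are distinguishable by distribution alone.

```python
# Hypothetical sketch of a "blind" membership baseline: a plain text
# classifier that separates members from non-members using only the
# raw samples, never querying the target model. `members` and
# `non_members` (lists of strings) are assumed to be loaded elsewhere.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def blind_baseline_auc(members, non_members, seed=0):
    texts = members + non_members
    labels = np.array([1] * len(members) + [0] * len(non_members))
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.5, random_state=seed, stratify=labels)
    # Bag-of-words features: any distribution shift between the two
    # pools (topic, vocabulary, dates, formatting) becomes separable.
    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_train), y_train)
    scores = clf.predict_proba(vec.transform(X_test))[:, 1]
    return roc_auc_score(y_test, scores)

# Example usage: print(blind_baseline_auc(members, non_members))
```

If this model-free classifier reaches a higher AUC than a published MI attack on the same benchmark, the benchmark's member/non-member split is confounded, and the attack's score cannot be attributed to membership leakage.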

Debeshee Das, Jie Zhang, Florian Tramèr • 2024

Related benchmarks

Task                         Dataset          Result     Rank
Membership Inference Attack  XSum (test)      AUC 0.515  43
Membership Inference Attack  AG News (test)   AUC 0.508  43
Membership Inference Attack  GitHub           AUC 0.656  26
Membership Inference Attack  HackerNews       AUC 0.527  26
Membership Inference Attack  arXiv            AUC 0.519  26
Membership Inference Attack  PubMed Central   AUC 0.489  26
Membership Inference Attack  Wikipedia (en)   AUC 0.471  26
Membership Inference Attack  Pile-CC          AUC 0.480  26
Membership Inference Attack  WikiText-103     AUC 0.502  14
Membership Inference Attack  Amazon Reviews   AUC 0.492  14

(Showing 10 of 13 rows. An AUC of 0.5 corresponds to chance-level membership guessing.)
