
Extracting Training Data from Large Language Models

About

It has become common to publish large (billion-parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data. We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
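The core of the attack the abstract describes is to generate many candidate sequences from the model and flag the ones the model assigns unusually high likelihood, since memorized training text tends to have very low perplexity. A minimal sketch of that ranking step, using hypothetical per-token log-probabilities in place of a real model's output:

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a sequence from its per-token log-probabilities
    (natural log). Lower perplexity means the model finds the text
    more likely -- a signal used to flag possible memorization."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Hypothetical per-token log-probs for three generated candidates.
candidates = {
    "memorized-looking text": [-0.1, -0.2, -0.1, -0.15],
    "ordinary text":          [-2.3, -1.9, -2.8, -2.1],
    "gibberish":              [-6.0, -5.5, -7.1, -6.4],
}

# Rank candidates, most suspicious (lowest perplexity) first.
ranked = sorted(candidates, key=lambda c: perplexity(candidates[c]))
print(ranked[0])  # -> "memorized-looking text"
```

In the paper's full pipeline the scores come from querying the target model (and are often compared against a second reference model to filter out merely common text); the numbers above are illustrative only.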

Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text Membership Inference Attack | LLaVA LLM Pre-training | AUC 0.603 | 88 |
| Suffix Ranking | Extraction Challenge Dataset | MP (%) 50.4 | 66 |
| Membership Inference | WikiMIA 32 tokens 1.0 | ROC AUC 71.1 | 66 |
| Membership Inference Attack | Asclepius (fine-tuned) | TPR@FPR=0.01 2.71 | 58 |
| Membership Inference Attack | Wikipedia | AUC 0.664 | 52 |
| Text Membership Inference Attack | LLaVA VLLM Tuning | AUC 0.986 | 44 |
| Membership Inference Attack | XSum (test) | AUC 0.788 | 43 |
| Membership Inference Attack | AG News (test) | AUC 0.661 | 43 |
| Membership Inference Attack | MedInstruct | AUC 92.1 | 36 |
| Membership Inference Attack | Wikipedia Pythia | ROC AUC 64 | 36 |
Showing 10 of 101 rows
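Most rows above report AUC: the probability that a membership-inference attack scores a true training member above a non-member. A minimal sketch of how that number is computed from attack scores, via the equivalent Mann–Whitney rank statistic (the scores below are made up for illustration):

```python
def roc_auc(member_scores, nonmember_scores):
    """ROC AUC as the Mann-Whitney U statistic: the probability that
    a randomly chosen member outscores a randomly chosen non-member,
    counting ties as half a win."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Hypothetical attack scores (higher = predicted "in training set").
members = [0.9, 0.8, 0.75, 0.4]
nonmembers = [0.7, 0.5, 0.3, 0.2]
print(round(roc_auc(members, nonmembers), 3))  # -> 0.875
```

An AUC of 0.5 means the attack is no better than guessing; values such as the 0.986 reported for LLaVA VLLM Tuning indicate near-perfect separation of members from non-members.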
