
Knowledge Unlearning for Mitigating Privacy Risks in Language Models

About

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for language models has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performance for larger LMs; it sometimes even substantially improves the underlying LM with just a few iterations. We also find that sequential unlearning is better than trying to unlearn all the data at once, and that unlearning is highly dependent on which kind of data (domain) is forgotten. Through comparisons with a previous data preprocessing method and a decoding method known to mitigate privacy risks for LMs, we show that unlearning can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori, while being much more efficient and robust. We release the code and dataset needed to replicate our results at https://github.com/joeljang/knowledge-unlearning.
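The core idea of the paper — gradient ascent on the target token sequences — can be illustrated on a toy model. The sketch below is an assumption-laden illustration, not the authors' implementation: it uses a trivial "language model" whose only parameters are a vector of vocabulary logits, and shows that ascending the negative log-likelihood of a target sequence (equivalently, descending its log-likelihood) makes that sequence less likely.

```python
import math
import random

# Toy "LM": a single softmax over a small vocabulary, parameterized by logits.
# (Hypothetical setup for illustration; the paper applies the same objective
# to the token-level NLL of a full autoregressive LM.)
vocab_size = 10
random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(vocab_size)]

def log_probs(logits):
    m = max(logits)  # stabilized log-softmax
    z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - z for x in logits]

def sequence_log_likelihood(logits, tokens):
    lp = log_probs(logits)
    return sum(lp[t] for t in tokens)

def unlearn_step(logits, tokens, lr=0.1):
    lp = log_probs(logits)
    probs = [math.exp(x) for x in lp]
    # Gradient of the sequence log-likelihood w.r.t. the logits:
    # d/d logits[j] = count_j - len(tokens) * probs[j]
    grad = [0.0] * len(logits)
    for t in tokens:
        grad[t] += 1.0
        for j in range(len(grad)):
            grad[j] -= probs[j]
    # Gradient *ascent* on the NLL == gradient *descent* on the log-likelihood.
    return [x - lr * g for x, g in zip(logits, grad)]

target = [3, 3, 7]  # token sequence to be forgotten
before = sequence_log_likelihood(logits, target)
for _ in range(10):
    logits = unlearn_step(logits, target)
after = sequence_log_likelihood(logits, target)
print(after < before)  # True: the target sequence has become less likely
```

In practice the same negated loss is applied to a Transformer LM's parameters via an optimizer such as Adam; the paper's finding is that a few such iterations suffice to forget the targets without materially hurting general language modeling.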

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU (test) | Normalized Accuracy | 57.3 | 76 |
| Language Understanding | MMLU | MMLU Score | 59.6 | 45 |
| Machine Unlearning | RWKU Llama 3.1 8B (Forget Set) | FB Score | 67 | 39 |
| Machine Unlearning | MUSE-News Llama 2 7B | Privacy Leakage | -99.75 | 27 |
| Machine Unlearning | MUSE Books | Privacy Leakage | 40.5 | 25 |
| Machine Unlearning | 32 target sequences (unlearning set) | EL10 | 2 | 24 |
| Classification | 9 classification tasks (test) | Accuracy | 51.9 | 24 |
| Dialogue | 4 dialogue tasks (Skill Talk, Empathetic Dialogues, Wizard of Internet, Wizard of Wikipedia) (test) | F1 Score | 12.3 | 24 |
| Machine Unlearning | TOFU (10%) | Forget Quality (FQ) | 2.20e-16 | 23 |
| General Language Understanding | General LLM Benchmarks (ARC-C, CSQA, HellaSwag, LAMBADA, MMLU, OpenBookQA, PIQA, Winogrande) (test) | ARC-C Accuracy | 56.9 | 22 |

Showing 10 of 47 rows.

Other info

Code: https://github.com/joeljang/knowledge-unlearning