
TOFU: A Task of Fictitious Unlearning for LLMs

About

Large language models trained on massive corpora of data from the web can memorize and reproduce sensitive or private data, raising both legal and ethical concerns. Unlearning, or tuning models to forget information present in their training data, provides a way to protect private data after training. Although several methods exist for such unlearning, it is unclear to what extent they result in models equivalent to those where the data to be forgotten was never learned in the first place. To address this challenge, we present TOFU, a Task of Fictitious Unlearning, a benchmark aimed at deepening our understanding of unlearning. We offer a dataset of 200 diverse synthetic author profiles, each consisting of 20 question-answer pairs, and a subset of these profiles called the forget set that serves as the target for unlearning. We compile a suite of metrics that work together to provide a holistic picture of unlearning efficacy. Finally, we provide a set of baseline results from existing unlearning algorithms. Importantly, none of the baselines we consider show effective unlearning, motivating continued efforts to develop approaches that truly tune models to behave as if they were never trained on the forget data at all.
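The data layout described above (200 synthetic author profiles, 20 QA pairs each, with an author-level forget set) can be sketched as follows. This is an illustrative reconstruction, not the official TOFU loader; the function names, placeholder QA text, and the 5% split size are assumptions chosen to mirror the description in the abstract and the "TOFU (5%)" entry in the table below.

```python
import random

def make_profiles(n_authors=200, qa_per_author=20):
    """Build placeholder profiles matching TOFU's shape:
    200 synthetic authors, each with 20 question-answer pairs."""
    return {
        f"author_{i}": [
            {"question": f"Q{j} about author_{i}", "answer": f"A{j}"}
            for j in range(qa_per_author)
        ]
        for i in range(n_authors)
    }

def forget_retain_split(profiles, forget_frac=0.05, seed=0):
    """Split whole profiles into forget/retain sets.

    TOFU defines forget sets at the author level (e.g. 1%, 5%, or 10%
    of authors), so entire profiles are unlearned, not individual
    QA pairs."""
    rng = random.Random(seed)
    authors = sorted(profiles)
    forget = set(rng.sample(authors, int(len(authors) * forget_frac)))
    forget_set = {a: profiles[a] for a in authors if a in forget}
    retain_set = {a: profiles[a] for a in authors if a not in forget}
    return forget_set, retain_set

forget_set, retain_set = forget_retain_split(make_profiles())
# With the defaults, 5% of 200 authors -> 10 forget profiles, 190 retain.
```

An unlearning method is then judged on how it behaves on the forget set (it should look as if those authors were never seen) while preserving performance on the retain set.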

Pratyush Maini, Zhili Feng, Avi Schwarzschild, Zachary C. Lipton, J. Zico Kolter • 2024

Related benchmarks

Task                              | Dataset                    | Result                          | Rank
Mathematical Reasoning            | GSM8K                      | Accuracy: 3.9                   | 1362
Multi-task Language Understanding | MMLU                       | Accuracy: 62.7                  | 876
Multi-turn Dialogue Evaluation    | MT-Bench                   | Overall Score: 7.52             | 447
Multi-task Language Understanding | MMLU                       | Accuracy: 25.9                  | 321
Knowledge                         | MMLU                       | Accuracy: 57.8                  | 136
Machine Unlearning                | TOFU (5%)                  | Forget Quality: 1               | 59
Multi-task Language Understanding | MMLU                       | MMLU Accuracy: 0.9              | 59
Unlearning                        | MUSE-News 1.0 (test)       | Privacy Leak: 0.001             | 55
Machine Unlearning                | TOFU Forget01 (1% authors) | Forget Quality (Rouge-L): 0.96  | 48
General Knowledge Evaluation      | MMLU                       | --                              | 45

(Showing 10 of 114 rows)
