Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport
About
Instruction-following large language models (LLMs), such as ChatGPT, have become widely popular among everyday users. However, these models can inadvertently disclose private, sensitive information to their users, underscoring the need for machine unlearning techniques that remove selected information from the models. While prior work has focused on forgetting small, random subsets of training data at the instance level, we argue that real-world scenarios often require the removal of all data associated with a user, which demands a more careful maneuver. In this study, we explore entity-level unlearning, which aims to erase all knowledge related to a target entity while preserving the model's remaining capabilities. To address this, we introduce Opt-Out, an optimal transport-based unlearning method that utilizes the Wasserstein distance from the model's initial parameters to achieve more effective and fine-grained unlearning. We also present the first Entity-Level Unlearning Dataset (ELUDe), designed to evaluate entity-level unlearning. Our empirical results demonstrate that Opt-Out surpasses existing methods, establishing a new standard for secure and adaptable LLMs that can accommodate user data removal requests without the need for full retraining.
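To make the core idea concrete, the following is a minimal, illustrative sketch of a Wasserstein-based unlearning regularizer. It is not the paper's implementation: `w1_distance` uses the closed-form 1-D Wasserstein-1 distance between the empirical distributions of two flattened parameter vectors (sort both, average absolute differences), and `unlearning_objective`, `lam`, and the scalar `forget_loss` argument are hypothetical names chosen for this example.

```python
import numpy as np

def w1_distance(theta, theta0):
    # Closed-form 1-D Wasserstein-1 distance between the empirical
    # distributions of two parameter vectors: sort both, then average
    # the elementwise absolute differences of the sorted values.
    a = np.sort(np.ravel(theta))
    b = np.sort(np.ravel(theta0))
    return float(np.mean(np.abs(a - b)))

def unlearning_objective(forget_loss, theta, theta0, lam=0.1):
    # Hypothetical combined objective: drive the loss on the forget set
    # up (gradient ascent on the forget data) while penalizing drift of
    # the current parameters from the initial checkpoint via the
    # Wasserstein term, so unlearning stays close to the original model.
    return -forget_loss + lam * w1_distance(theta, theta0)
```

In practice the distance would be computed per weight matrix on GPU tensors and balanced against a retain-set loss; this sketch only shows how a transport-based distance to the initial parameters can act as the regularizer the abstract describes.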
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | TOFU (5%) | Forget Quality | 86.6 | 45 |
| General Language Understanding | General LLM Benchmarks (ARC-C, CSQA, HellaSwag, LAMBADA, MMLU, OpenBookQA, PIQA, Winogrande) (test) | ARC-C Accuracy | 58.9 | 22 |
| Machine Unlearning | ELUDe (forget) | FQ | 87.8 | 22 |
| Machine Unlearning | ELUDe (retain and world) | RQ | 49.4 | 22 |
| Machine Unlearning | WMDP Cyber (test) | MMLU | 60.63 | 21 |
| Membership Inference Attack | Single-target entity-level unlearning, Forget vs. Test samples, paraphrased (test) | Mean Success Rate | 49.1 | 20 |
| Machine Unlearning | MUSE-Books Harry Potter v1.0 (Overall) | R-Forget | 31.02 | 17 |
| Unlearning | MUSE-Books Harry Potter, 100 samples (forget set) | R-Forget | 31.02 | 13 |
| Unlearning | MUSE-Books Harry Potter, forget set, 500 samples | R-Forget-500 | 37.75 | 13 |
| Knowledge Preservation and Reasoning | MMLU | MMLU Score | 60.48 | 13 |