
A Closer Look at Machine Unlearning for Large Language Models

About

Large language models (LLMs) may memorize sensitive or copyrighted content, raising privacy and legal concerns. Because retraining from scratch is prohibitively expensive, researchers employ machine unlearning to remove specific content from LLMs while preserving overall performance. In this paper, we discuss several issues in machine unlearning for LLMs and provide our insights on possible approaches. To address the inadequate evaluation of model outputs after unlearning, we introduce three additional metrics that assess token diversity, sentence semantics, and factual correctness. We then categorize unlearning methods into untargeted and targeted ones and discuss their respective issues. Specifically, the behavior that untargeted unlearning attempts to approximate is unpredictable and may involve hallucinations, and existing regularization is insufficient for targeted unlearning. To alleviate these issues, we propose using a maximizing-entropy (ME) objective for untargeted unlearning and incorporating an answer preservation (AP) loss as regularization for targeted unlearning. Experimental results across three scenarios, i.e., fictitious unlearning, continual unlearning, and real-world unlearning, demonstrate the effectiveness of our approaches. The code is available at https://github.com/sail-sg/closer-look-LLM-unlearning.
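The maximizing-entropy (ME) idea can be sketched concretely: on forget-set tokens, push the model's next-token distribution toward uniform, which is equivalent to minimizing KL(p ‖ U). The snippet below is a minimal NumPy illustration of that objective, not the authors' exact implementation; the function names and the epsilon constant are our own choices.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def me_loss(logits):
    """Maximizing-entropy objective on forget-set positions.

    Minimizing KL(p || uniform) = log V - H(p) is equivalent to
    maximizing the entropy H(p) of the predicted token distribution.
    `logits` has shape (num_positions, vocab_size).
    """
    p = softmax(logits)
    vocab_size = logits.shape[-1]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return float((np.log(vocab_size) - entropy).mean())
```

With uniform logits the loss is zero (the distribution is already maximum-entropy); a sharply peaked distribution, typical of memorized content, yields a large positive loss that the unlearning update drives down.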

Xiaojian Yuan, Tianyu Pang, Chao Du, Kejiang Chen, Weiming Zhang, Min Lin • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-task Language Understanding | MMLU | – | 842 |
| Multi-task Language Understanding | MMLU (test) | Normalized Accuracy: 60.8 | 76 |
| Knowledge | MMLU | Accuracy: 47.1 | 71 |
| Language Understanding | MMLU | MMLU Score: 60.8 | 45 |
| Machine Unlearning | RWKU Llama 3.1 8B (Forget Set) | FB Score: 64.4 | 39 |
| Machine Unlearning | MUSE-News Llama 2 7B | Privacy Leakage: -99.75 | 27 |
| Hallucination Detection | HaluEval Dialogue latest (test) | Accuracy: 45.5 | 22 |
| Knowledge Evaluation | Natural Questions (NQ) (Evaluation) | Accuracy: 5.7 | 22 |
| Machine Unlearning | RWKU Llama 3.1 8B (Neighbor Set) | FB: 74.5 | 15 |
| Knowledge Unlearning | Internal e-commerce benchmark, medium-scale seller, 387 items (Forget Set) | ROUGE: 89.4 | 14 |

Showing 10 of 29 rows.
