Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

About

Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora. However, this poses risks of privacy and copyright violations, highlighting the need for efficient machine unlearning methods that remove sensitive data without retraining from scratch. While Gradient Ascent (GA) is commonly used to unlearn by reducing the likelihood of generating unwanted content, it leads to unstable optimization and catastrophic forgetting of retained knowledge. We find that combining GA with low-rank adaptation results in poor trade-offs between computational cost and generative performance. To address these challenges, we propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables robust and efficient unlearning for LLMs. First, we introduce Inverted Hinge Loss, which suppresses unwanted tokens while maintaining fluency by boosting the probability of the next most likely token. Second, we develop a data-adaptive initialization for LoRA adapters via low-rank approximation weighted with relative Fisher information, thereby focusing updates on parameters critical for removing targeted knowledge. Experiments on the Training Data Extraction Challenge dataset using GPT-Neo models as well as on the TOFU benchmark with Phi-1.5B and Llama2-7B models demonstrate that our approach effectively removes sensitive information while maintaining reasoning and generative capabilities with minimal impact. Our implementation can be found at https://github.com/csm9493/efficient-llm-unlearning.
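The two components above can be sketched in code. The sketch below is our reading of the abstract, not the repository's implementation: the Inverted Hinge Loss is taken as 1 + p(target) - max over competing tokens, which lowers the probability of the token to be unlearned while raising its strongest competitor, and the LoRA initialization is taken as a truncated SVD of the pretrained weight scaled by relative Fisher importance. The function names and the `fisher_forget` / `fisher_retain` inputs (diagonal Fisher estimates, assumed precomputed from squared gradients on the forget and retain sets) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def inverted_hinge_loss(logits, target_id):
    """Inverted Hinge Loss for one next-token prediction (sketch).

    L = 1 + p(target) - max_{v != target} p(v).
    Minimizing L suppresses the target (unwanted) token while boosting
    the next most likely token, avoiding the unbounded divergence of
    plain gradient ascent on the cross-entropy loss.
    """
    p = softmax(logits)
    p_target = p[target_id]
    p_runner_up = np.max(np.delete(p, target_id))  # best competing token
    return 1.0 + p_target - p_runner_up

def fisher_weighted_lora_init(W, fisher_forget, fisher_retain, rank, eps=1e-8):
    """Data-adaptive LoRA initialization (sketch, names hypothetical).

    Weights W by the relative Fisher information (forget vs. retain),
    then takes a rank-`rank` SVD so the adapter starts aligned with the
    directions most responsible for the targeted knowledge.
    """
    rel = fisher_forget / (fisher_retain + eps)      # relative importance
    U, S, Vt = np.linalg.svd(rel * W, full_matrices=False)
    B = U[:, :rank] * np.sqrt(S[:rank])              # (d_out, rank)
    A = np.sqrt(S[:rank])[:, None] * Vt[:rank]       # (rank, d_in)
    return B, A                                      # adapter update = B @ A
```

In this reading, training would minimize the hinge term on forget-set tokens through the adapters only, leaving the frozen base weights to preserve retained knowledge.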

Sungmin Cha, Sungjun Cho, Dasol Hwang, Moontae Lee • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Machine Unlearning | TOFU Forget01 (1% authors) | Forget Quality (ROUGE-L) | 0.99 | 48
Machine Unlearning | TOFU Forget05 (5% authors) | Forget Quality (ROUGE-L) | 0.99 | 42
Machine Unlearning | TOFU Forget10 (10% authors) | Forget Quality (ROUGE-L) | 0.99 | 42
Privacy-preserving unlearning | TDEC | EL10 (%) | 1.9 | 37
Machine Unlearning | TOFU 1.0 (forget01) | MU Score | 52 | 33
Machine Unlearning | TOFU Forget01, Phi-1.5B (1% authors) | Forget Quality (ROUGE-L) | 93 | 24
Machine Unlearning | TOFU Forget10, Phi-1.5B | Forget Quality (FQ) | 2.37e-6 | 24
Machine Unlearning | TOFU Forget05, Phi-1.5B (5% authors) | Forget Quality (ROUGE-L) | 0.46 | 20
LLM Unlearning | RWKU | USR | 84.3 | 16
Language Model Unlearning | TOFU Forget10 | Forget Quality (FQ) | 100 | 15

Showing 10 of 16 rows.
