
Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

About

Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora. However, this poses risks of privacy and copyright violations, highlighting the need for efficient machine unlearning methods that remove sensitive data without retraining from scratch. While Gradient Ascent (GA) is commonly used to unlearn by reducing the likelihood of generating unwanted content, it leads to unstable optimization and catastrophic forgetting of retained knowledge. We find that combining GA with low-rank adaptation results in poor trade-offs between computational cost and generative performance. To address these challenges, we propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables robust and efficient unlearning for LLMs. First, we introduce Inverted Hinge Loss, which suppresses unwanted tokens while maintaining fluency by boosting the probability of the next most likely token. Second, we develop a data-adaptive initialization for LoRA adapters via low-rank approximation weighted with relative Fisher information, thereby focusing updates on parameters critical for removing targeted knowledge. Experiments on the Training Data Extraction Challenge dataset using GPT-Neo models, as well as on the TOFU benchmark with Phi-1.5B and Llama2-7B models, demonstrate that our approach effectively removes sensitive information while maintaining reasoning and generative capabilities with minimal impact. Our implementation can be found at https://github.com/csm9493/efficient-llm-unlearning.
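The Inverted Hinge Loss described in the abstract can be sketched as follows. This is an illustrative reimplementation based only on the abstract's description, not the authors' code: the function name and the exact hinge form (1 + p_target − p_alt) are assumptions. The idea is that the loss falls as the to-be-forgotten token loses probability mass specifically to the next most likely token, which keeps generations fluent rather than degenerate.

```python
import torch
import torch.nn.functional as F

def inverted_hinge_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Sketch of an inverted-hinge-style unlearning loss.

    logits:  (batch, vocab_size) next-token logits from the model
    targets: (batch,) ids of the tokens we want the model to unlearn
    """
    probs = F.softmax(logits, dim=-1)
    # probability currently assigned to the token we want to forget
    p_target = probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # best alternative: the highest-probability token other than the target
    masked = probs.scatter(-1, targets.unsqueeze(-1), float("-inf"))
    p_alt = masked.max(dim=-1).values
    # loss shrinks as mass shifts from the target token to its best
    # alternative; it is bounded in [0, 2] since both terms lie in [0, 1]
    return (1.0 + p_target - p_alt).mean()
```

Unlike plain gradient ascent on the cross-entropy, a hinge of this shape has a natural floor: once the target token is less likely than its best alternative, the loss approaches zero and optimization stops pushing, which is one plausible reading of why the abstract reports more stable unlearning.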

Sungmin Cha, Sungjun Cho, Dasol Hwang, Moontae Lee • 2024

Related benchmarks

Task            Dataset    Result    Rank
LLM Unlearning  RWKU       USR 84.3  16
LLM Unlearning  EDU-RELAT  USR 87.6  8
LLM Unlearning  KnowUnDo   USR 88.2  8
LLM Unlearning  TOFU       USR 85    8
