
On Large Language Model Continual Unlearning

About

While large language models (LLMs) have demonstrated impressive performance across various domains and tasks, their security issues have become increasingly severe. Machine unlearning has emerged as a representative approach to model safety and security by removing the influence of undesired data on the target model. However, existing methods do not sufficiently account for the fact that unlearning requests in real-world scenarios emerge continuously, especially for LLMs, which can lead to accumulated utility loss that eventually becomes unacceptable. Moreover, existing LLM unlearning methods often ignore that access to previously used data is limited by privacy concerns and copyright protection; without that data, preserving utility during unlearning is much harder. To overcome these challenges, we propose the OOO framework, which comprises an Orthogonal low-rank adapter (LoRA) for continually unlearning requested data and an Out-Of-Distribution (OOD) detector that measures the similarity between an input and the unlearning data. The orthogonal LoRA achieves parameter disentanglement among continual unlearning requests. The OOD detector is trained with a novel contrastive entropy loss and uses a glocal-aware scoring mechanism. During inference, the OOO framework decides whether, and to what extent, to load the unlearning LoRA based on the OOD detector's predicted similarity between the input and the unlearned knowledge. Notably, OOO's effectiveness does not rely on any retained data. We conducted extensive experiments comparing OOO with state-of-the-art LLM unlearning methods across three tasks and seven datasets. The results indicate that OOO consistently achieves the best unlearning effectiveness and utility preservation, especially when facing continuous unlearning requests. The source code is available at https://github.com/GCYZSL/O3-LLM-UNLEARNING.
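The inference-time gating idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the linear blending rule, and treating the OOD detector's output as a scalar in [0, 1] are all assumptions made for clarity.

```python
import numpy as np

def apply_gated_lora(base_weight, lora_A, lora_B, ood_score):
    """Blend an unlearning LoRA update into a base weight matrix in
    proportion to the OOD detector's similarity score.

    Hypothetical interface: the paper's actual gating rule may differ;
    here we simply scale the low-rank update (B @ A) by the score."""
    gate = float(np.clip(ood_score, 0.0, 1.0))  # 0 = unrelated input, 1 = unlearned knowledge
    return base_weight + gate * (lora_B @ lora_A)

# Toy usage: a 4x4 base weight with a rank-1 LoRA update.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
A = rng.standard_normal((1, 4))   # LoRA down-projection
B = rng.standard_normal((4, 1))   # LoRA up-projection

W_far = apply_gated_lora(W, A, B, ood_score=0.0)   # input far from unlearned data: base model unchanged
W_near = apply_gated_lora(W, A, B, ood_score=1.0)  # input matches unlearned data: full unlearning adapter
```

With a score of 0 the base weights are returned untouched, so unrelated queries pay no utility cost; with a score of 1 the full unlearning adapter is applied. Intermediate scores interpolate between the two.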

Chongyang Gao, Lixu Wang, Kaize Ding, Chenkai Weng, Xiao Wang, Qi Zhu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Knowledge Unlearning | 16-task Sequential Unlearning, Forgotten Data (Avg) | Context-aware Refusal Rate (CRR) | 82.55 | 18 |
| Knowledge Unlearning | 16-task Sequential Unlearning, Forgotten Data (Last) | CRR | 73.03 | 16 |
| Knowledge Retention | 16-task Sequential Unlearning, Retained Data (Avg) | Specificity | 91.85 | 9 |
| Knowledge Retention | 16-task Sequential Unlearning, Retained Data (Last) | Specificity | 92.85 | 8 |
| Sequential Unlearning | Sequential TOFU v1.0 (Request 1) | Sequential Utility (S.U.) | 12.5 | 2 |
| Sequential Unlearning | Sequential TOFU v1.0 (Request 2) | S.U. | 15.8 | 2 |
| Sequential Unlearning | Sequential TOFU v1.0 (Request 3) | S.U. | 15.5 | 2 |
