
Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond

About

LLM unlearning has recently been introduced to comply with data regulations and to address the safety and ethical concerns of LLMs by removing undesired data-model influence. However, state-of-the-art unlearning methods face a critical vulnerability: they are susceptible to "relearning" the removed information from a small number of forget data points, known as relearning attacks. In this paper, we systematically investigate how to make unlearned models robust against such attacks. For the first time, we establish a connection between robust unlearning and sharpness-aware minimization (SAM) through a unified robust optimization framework, analogous to adversarial training designed to defend against adversarial attacks. Our analysis of SAM reveals that smoothness optimization plays a pivotal role in mitigating relearning attacks. We therefore explore diverse smoothing strategies to further enhance unlearning robustness. Extensive experiments on benchmark datasets, including WMDP and MUSE, demonstrate that SAM and other smoothness optimization approaches consistently improve the resistance of LLM unlearning to relearning attacks. Notably, smoothness-enhanced unlearning also helps defend against (input-level) jailbreaking attacks, broadening the impact of our proposal in robustifying LLM unlearning. Code is available at https://github.com/OPTML-Group/Unlearn-Smooth.
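The SAM procedure the abstract refers to can be summarized as: ascend to the worst-case weight perturbation within a small L2 ball of radius rho, then descend using the gradient computed at that perturbed point, which biases optimization toward flat (smooth) minima. The following is a minimal generic sketch on a toy quadratic, not the paper's implementation; the names `sam_step`, `rho`, and `lr` are illustrative.

```python
import numpy as np

def sam_step(w, loss_grad, rho=0.05, lr=0.1):
    """One sharpness-aware minimization (SAM) step on parameters w.

    SAM first perturbs w toward the locally sharpest direction
    (normalized gradient ascent within an L2 ball of radius rho),
    then applies the gradient computed at the perturbed weights.
    """
    g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_adv = loss_grad(w + eps)                   # gradient at perturbed point
    return w - lr * g_adv                        # descend with that gradient

# Toy example: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda w: 2.0 * w)
```

Note that with a fixed rho, the iterates settle near (not exactly at) the minimizer, since the perturbation keeps a constant magnitude; in practice rho is a small tuned hyperparameter.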

Chongyu Fan, Jinghan Jia, Yihua Zhang, Anil Ramakrishna, Mingyi Hong, Sijia Liu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Multi-task Language Understanding | MMLU | Accuracy 27.9 | 321
Unlearning | MUSE-News 1.0 (test) | Privacy Leak 0.00e+0 | 55
Machine Unlearning | MUSE Books | -- | 35
General Knowledge | HellaSwag | Accuracy 57 | 27
Unlearning | MUSE-Books 1.0 (test) | Unlearn Score 74.8 | 24
Relearn Attack | MUSE NEWS | Verb Memory (Df) 55.13 | 24
Machine Unlearning | WMDP | Unlearn Score 72.1 | 16
Machine Unlearning | MUSE-Books Relearn 50% | Forgetting Score (No VerbMem) 16.574 | 15
Machine Unlearning | MUSE-Books RELEARN-25% | Forgetting Rate (VerbMem) 17.424 | 15
Tamper Resistance Evaluation | Adversarial Fine-tuning Bio-risk | Max Unique Examples 20 | 11

Showing 10 of 21 rows
