
Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning

About

This work studies the problem of large language model (LLM) unlearning, aiming to remove unwanted data influences (e.g., copyrighted or harmful content) while preserving model utility. Despite the increasing demand for unlearning, a technically grounded optimization framework is lacking. Gradient ascent (GA)-type methods, though widely used, are suboptimal as they reverse the learning process without controlling optimization divergence (i.e., deviation from the pre-trained state), leading to risks of over-forgetting and potential model collapse. Negative preference optimization (NPO) has been proposed to address this issue and is considered one of the state-of-the-art LLM unlearning approaches. In this work, we revisit NPO and identify another critical issue: reference model bias. This bias arises from using the reference model (i.e., the model prior to unlearning) to evaluate unlearning success, which can compromise NPO's effectiveness. Specifically, it leads to (a) uneven allocation of optimization power across forget data with varying difficulty levels and (b) ineffective gradient weight smoothing during the early stages of unlearning optimization. To overcome these challenges, we propose a simple yet effective unlearning optimization framework, called SimNPO, showing that "simplicity" in removing the reliance on a reference model (through the lens of simple preference optimization) benefits unlearning. We provide deeper insights into SimNPO's advantages through an analysis based on mixtures of Markov chains. Extensive experiments further validate SimNPO's efficacy on benchmarks like TOFU and MUSE, as well as its robustness against relearning attacks. Codes are available at https://github.com/OPTML-Group/Unlearn-Simple.
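To make the contrast concrete, below is a minimal sketch of the two per-example losses described in the abstract: NPO, which weights the forget signal by the log-ratio against a reference model, and SimNPO, which drops the reference model in favor of length normalization (in the style of simple preference optimization). The hyperparameter values (`beta`, `gamma`) are illustrative assumptions, not the paper's tuned settings; consult the linked repository for the authors' implementation.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def npo_loss(logp_theta: float, logp_ref: float, beta: float = 0.1) -> float:
    """NPO loss on a forget example: depends on the reference model's
    log-probability logp_ref, which is the source of reference model bias."""
    log_ratio = logp_theta - logp_ref
    return -(2.0 / beta) * math.log(sigmoid(-beta * log_ratio))

def simnpo_loss(logp_theta: float, seq_len: int,
                beta: float = 2.5, gamma: float = 0.0) -> float:
    """SimNPO loss on a forget example: reference-free and
    length-normalized, with a margin term gamma."""
    return -(2.0 / beta) * math.log(sigmoid(-(beta / seq_len) * logp_theta - gamma))
```

In both cases the loss shrinks as the current model assigns lower probability to the forget data, but SimNPO's weighting depends only on the current model's (length-normalized) log-probability, not on how the reference model scored the example.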

Chongyu Fan, Jiancheng Liu, Licong Lin, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | TOFU (5%) | Forget Quality | 0.6284 | 45 |
| Language Understanding | MMLU | MMLU Score | 59.6 | 45 |
| Machine Unlearning | TOFU | Forget Quality (FQ) | 0.0315 | 43 |
| Unlearning | Syntax-preserving dataset (forget set) | PrivLeak | 0.54 | 40 |
| General Knowledge Evaluation | MMMLU | General Knowledge Accuracy | 51.84 | 29 |
| Machine Unlearning | MUSE-News (Llama 2 7B) | Privacy Leakage | -99.8951 | 27 |
| Machine Unlearning | MUSE Books | Privacy Leakage | -51.7018 | 25 |
| Machine Unlearning | TOFU (10%) | Forget Quality (FQ) | 0.45 | 23 |
| Machine Unlearning | WMDP Cyber (test) | MMLU | 54.25 | 21 |
| Question Answering | MMLU | Accuracy | 48.26 | 21 |
Showing 10 of 30 rows.
