
Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning

About

Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from the pre-trained model while preserving the model's utility on other tasks. Several practical methods have recently been proposed for LLM unlearning, mostly based on gradient ascent (GA) on the loss of undesirable data. However, on certain unlearning tasks, these methods either fail to effectively unlearn the target data or suffer from catastrophic collapse -- a drastic degradation of the model's utility. In this paper, we propose Negative Preference Optimization (NPO), a simple alignment-inspired method that can efficiently and effectively unlearn a target dataset. We theoretically show that the progression toward catastrophic collapse when minimizing the NPO loss is exponentially slower than under GA. Through experiments on synthetic data and the benchmark TOFU dataset, we demonstrate that NPO-based methods achieve a better balance between unlearning the undesirable data and maintaining the model's utility. We also observe that NPO-based methods generate more sensible outputs than GA-based methods, whose outputs are often gibberish. Remarkably, on TOFU, NPO-based methods are the first to achieve reasonable unlearning results when forgetting 50% (or more) of the training data, whereas existing methods already struggle with forgetting 10% of the training data.
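The NPO objective replaces the unbounded gradient-ascent forget loss with a bounded, preference-style loss: L_NPO = (2/β) E[log(1 + (π_θ(y|x)/π_ref(y|x))^β)], where π_θ is the current model, π_ref the frozen reference model, and β an inverse-temperature hyperparameter. The sketch below, a minimal scalar illustration (not the authors' training code), shows why NPO degrades more slowly than GA: its gradient with respect to the policy log-probability is 2·σ(β·log-ratio), which vanishes once the forget data is already unlikely, whereas the GA gradient stays constant.

```python
import math

def npo_loss(policy_logp, ref_logp, beta=0.1):
    """NPO loss for a single forget example (scalar sketch).

    policy_logp / ref_logp: log-probability of the forget-set response
    under the current model and the frozen reference model.
    (2/beta) * log(1 + (pi_theta / pi_ref)^beta)
      = (2/beta) * log1p(exp(beta * (policy_logp - ref_logp)))
    """
    log_ratio = policy_logp - ref_logp
    return (2.0 / beta) * math.log1p(math.exp(beta * log_ratio))

def npo_grad(policy_logp, ref_logp, beta=0.1):
    """d(npo_loss)/d(policy_logp) = 2 * sigmoid(beta * log_ratio).

    Bounded in (0, 2) and vanishing as log_ratio -> -inf, i.e. once the
    forget data is already unlikely -- unlike gradient ascent, whose
    gradient magnitude on the forget loss does not shrink.
    """
    log_ratio = policy_logp - ref_logp
    return 2.0 / (1.0 + math.exp(-beta * log_ratio))

# At log_ratio = 0 the loss equals (2/beta) * log(2); as the model
# drives the forget data's probability far below the reference,
# the loss and its gradient both approach 0.
```

Usage: `npo_loss(-100.0, 0.0)` is close to zero while `npo_grad(-100.0, 0.0)` is tiny, so further updates barely change the model, which is the mechanism behind the exponentially slower collapse claimed in the paper.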

Ruiqi Zhang, Licong Lin, Yu Bai, Song Mei • 2024

Related benchmarks

Task                              | Dataset                 | Metric                            | Result | Rank
Multi-task Language Understanding | MMLU                    | -                                 | -      | 842
Jailbreak Attack                  | HarmBench               | Attack Success Rate (ASR)         | 76.88  | 376
Safety Evaluation                 | HarmBench               | HarmBench Score                   | 0.06   | 76
Multi-task Language Understanding | MMLU (test)             | Normalized Accuracy               | 59.6   | 76
Hierarchical Unlearning           | MedForget 1.0 (Forget)  | Gen Score                         | 59.17  | 72
Machine Unlearning                | TOFU (5%)               | Forget Quality                    | 0.68   | 45
Language Understanding            | MMLU                    | MMLU Score                        | 61.4   | 45
General Capability                | MTBench                 | MTBench Score                     | 7.79   | 43
Machine Unlearning                | TOFU                    | Forget Quality (FQ)               | 0.0068 | 43
Over-refusal                      | Wildjailbreak (Benign)  | Wildjailbreak Benign Refusal Rate | 43.2   | 42
Showing 10 of 121 rows
