Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference

About

As Large Language Models (LLMs) demonstrate extensive capability in learning from documents, LLM unlearning has become an increasingly important research area for addressing privacy and copyright concerns. A conventional LLM unlearning task typically involves two goals: (1) the target LLM should forget the knowledge in the specified forget documents, and (2) it should retain the other knowledge it possesses, for which we assume access to a small number of retain documents. To achieve both goals, a mainstream class of LLM unlearning methods introduces an optimization framework that combines two objectives: maximizing the prediction loss on the forget documents while minimizing it on the retain documents. This approach suffers from two challenges: degenerated output and catastrophic forgetting. In this paper, we propose a novel unlearning framework called Unlearning from Logit Difference (ULD), which introduces an assistant LLM that aims to achieve the opposite of the unlearning goals: remembering the forget documents and forgetting the retain knowledge. ULD then derives the unlearned LLM by computing the logit difference between the target and the assistant LLMs. We show that such reversed objectives naturally resolve both aforementioned challenges while significantly improving training efficiency. Extensive experiments demonstrate that our method efficiently achieves the intended forgetting while preserving the LLM's overall capabilities, reducing training time by more than threefold. Notably, our method loses 0% of model utility on the ToFU benchmark, whereas baseline methods may sacrifice 17% of utility on average to achieve comparable forget quality. Our code will be publicly available at https://github.com/UCSB-NLP-Chang/ULD.
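The core inference-time idea described above can be sketched in a few lines: the unlearned model's next-token scores are obtained by subtracting the assistant's logits (which encode only the forget-document knowledge) from the target model's logits. This is a minimal illustrative sketch, not the paper's exact formulation; the subtraction weight `alpha` and the function name are assumptions introduced here for clarity.

```python
import torch

def uld_logits(target_logits: torch.Tensor,
               assistant_logits: torch.Tensor,
               alpha: float = 1.0) -> torch.Tensor:
    """Sketch of logit-difference unlearning.

    target_logits: next-token logits from the original (target) LLM.
    assistant_logits: logits from the assistant LLM trained with the
        reversed objectives (remember forget docs, forget retain knowledge).
    alpha: hypothetical weight on the assistant's contribution; the
        paper's actual combination rule may differ.
    """
    # Subtracting the assistant's logits suppresses exactly the
    # knowledge the assistant was trained to remember (the forget set),
    # while leaving the target model's other predictions intact.
    return target_logits - alpha * assistant_logits

# Toy example over a 3-token vocabulary:
target = torch.tensor([[2.0, 1.0, 0.0]])
assistant = torch.tensor([[1.0, 0.0, 0.0]])
unlearned = uld_logits(target, assistant, alpha=0.5)
```

In practice both models would be run on the same input, and the combined logits would then be passed through a softmax (or used directly for greedy decoding) exactly as a single model's logits would be.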

Jiabao Ji, Yujian Liu, Yang Zhang, Gaowen Liu, Ramana Rao Kompella, Sijia Liu, Shiyu Chang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | TOFU (5%) | Forget Quality (FQ) | 0.73 | 45 |
| Language Modeling | WikiText (held-out) | Perplexity (PPL) | 9.95 | 25 |
| Machine Unlearning | TOFU (10%) | Forget Quality (FQ) | 0.48 | 23 |
| Reasoning and Question Answering | Standard LLM Benchmarks (BoolQ, RTE, HellaSWAG, ARC, OpenBookQA, PiQA) | Avg Accuracy | 66.85 | 15 |
| Text Generation | Harry Potter forget data (400 chunks) | BLEU | 0.67 | 15 |
| Machine Unlearning | TOFU (1%) | Forget Quality (FQ) | 0.99 | 15 |
| Machine Unlearning | TOFU Llama 3.1 8B (5%) | Forget Quality (FQ) | 0.169 | 12 |
| Machine Unlearning | TOFU Llama 3.1 8B (10%) | Forget Quality (FQ) | 0.012 | 11 |

Other info

Code: https://github.com/UCSB-NLP-Chang/ULD
