
Large Language Model Unlearning

About

We study how to perform unlearning, i.e. forgetting undesirable misbehaviors, on large language models (LLMs). We show that at least three scenarios of aligning LLMs with human preferences can benefit from unlearning: (1) removing harmful responses, (2) erasing copyright-protected content upon request, and (3) reducing hallucinations. As an alignment technique, unlearning has three advantages. (1) It requires only negative (e.g. harmful) examples, which are much easier and cheaper to collect (e.g. via red teaming or user reporting) than the positive (e.g. helpful, often human-written) examples required by RLHF (RL from human feedback). (2) It is computationally efficient. (3) It is especially effective when we know which training samples cause the misbehavior. To the best of our knowledge, our work is among the first to explore LLM unlearning, and among the first to formulate the settings, goals, and evaluations of LLM unlearning. We show that when practitioners have limited resources, and the priority is therefore to stop generating undesirable outputs rather than to generate desirable ones, unlearning is particularly appealing. Despite having only negative samples, our ablation study shows that unlearning can still achieve better alignment performance than RLHF using just 2% of its computational time.
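As a concrete illustration of the core idea, below is a minimal sketch of unlearning one negative sample via gradient ascent, using PyTorch and Hugging Face Transformers. The model name, example text, and learning rate are illustrative assumptions rather than the paper's exact recipe, and the paper's full objective also pairs this forgetting term with losses that preserve behavior on normal data, which are omitted here for brevity.

```python
# Minimal sketch of LLM unlearning via gradient ascent on a negative sample.
# Model choice, sample text, and learning rate are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-6)

# A negative (harmful) sample, e.g. collected via red teaming -- hypothetical.
bad_text = "Sure, here is how to pick a lock: ..."
batch = tokenizer(bad_text, return_tensors="pt")
batch["labels"] = batch["input_ids"].clone()  # standard causal-LM labels

model.train()
out = model(**batch)   # out.loss is the mean NLL of the harmful text
loss = -out.loss       # negate it: gradient ASCENT on the negative sample
optimizer.zero_grad()
loss.backward()
optimizer.step()       # one step pushing the model away from this output
```

Negating the language-modeling loss turns gradient descent into ascent, so each step lowers the model's probability of reproducing the harmful continuation, which is why only negative examples are needed.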

Yuanshun Yao, Xiaojun Xu, Yang Liu · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Hierarchical Unlearning | MedForget 1.0 (Forget) | Gen Score | 57.27 | 72 |
| Unlearning | MUSE-News 1.0 (test) | Exact Memorization | 0.00e+0 | 46 |
| Machine Unlearning | TOFU (5%) | Forget Quality | 0.2705 | 45 |
| Machine Unlearning | TOFU | Forget Quality (FQ) | 0.0541 | 43 |
| LLM Unlearning | TOFU 5% (forget set) | FQ | -11.511 | 25 |
| Hierarchical Unlearning | MedForget 1.0 (Retain) | Generation Score | 50.27 | 24 |
| Machine Unlearning | TOFU 5% forget | Loss | 0.623 | 20 |
| Machine Unlearning | TOFU 10% forget set 1.0 | FQ | -239 | 18 |
| Machine Unlearning | TOFU Llama-3.2 Instruct (average of 1%, 5%, 10% forget sets) | FQ Score | -81.257 | 18 |
| Machine Unlearning | TOFU 1% (forget set) | FQ | -1.953 | 18 |
Showing 10 of 31 rows
