
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models

About

Large language models (LLMs) inevitably memorize sensitive, copyrighted, and harmful knowledge from their training corpus; it is therefore crucial to erase this knowledge from the models. Machine unlearning is a promising solution for efficiently removing specific knowledge by modifying models post hoc. In this paper, we propose a Real-World Knowledge Unlearning benchmark (RWKU) for LLM unlearning. RWKU is designed around three key factors: (1) For the task setting, we consider a more practical and challenging unlearning setting, where neither the forget corpus nor the retain corpus is accessible. (2) For the knowledge source, we choose 200 real-world famous people as the unlearning targets and show that such popular knowledge is widely present in various LLMs. (3) For the evaluation framework, we design a forget set and a retain set to evaluate the model's capabilities across various real-world applications. For the forget set, we provide four membership inference attack (MIA) methods and nine kinds of adversarial attack probes to rigorously test unlearning efficacy. For the retain set, we assess locality and utility in terms of neighbor perturbation, general ability, reasoning ability, truthfulness, factuality, and fluency. We conduct extensive experiments across two unlearning scenarios, two models, and six baseline methods, and obtain some meaningful findings. We release our benchmark and code publicly at http://rwku-bench.github.io for future work.
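To make the MIA-based efficacy evaluation concrete, the sketch below shows one standard member of that attack family: a loss-based MIA that scores a probe text by its average negative log-likelihood under the model. The specific four MIA methods used by RWKU are not detailed in this abstract, so the function name `loss_mia_score` and the toy log-probabilities are illustrative assumptions, not the benchmark's implementation.

```python
import math

def loss_mia_score(token_logprobs):
    """Average negative log-likelihood of a probe text under the model.

    A low score means the model assigns high probability to the text,
    suggesting the target knowledge is still memorized; after successful
    unlearning, the score on forget-set probes should rise.
    """
    return -sum(token_logprobs) / len(token_logprobs)

# Toy illustration (assumed numbers): per-token probabilities for the same
# probe sentence before and after unlearning.
before = [math.log(p) for p in (0.9, 0.8, 0.85, 0.9)]  # confident model
after = [math.log(p) for p in (0.3, 0.2, 0.25, 0.3)]   # confidence removed

print(loss_mia_score(before) < loss_mia_score(after))  # → True
```

In practice the token log-probabilities would come from the unlearned LLM itself; comparing score distributions on forget-set members versus non-members is what makes this a membership inference test.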

Zhuoran Jin, Pengfei Cao, Chenhao Wang, Zhitao He, Hongbang Yuan, Jiachun Li, Yubo Chen, Kang Liu, Jun Zhao• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| General Language Model Evaluation | Utility Set (MMLU, BBH, TruthfulQA, TriviaQA, AlpacaEval) | MMLU | 68.79 | 34 |
| Knowledge Unlearning | RWKU (Forget Set) | FB | 64.44 | 23 |
| Membership Inference Attack | RWKU MIA Set | FM Score | 7.1544 | 17 |
| Membership Inference Attack | TOFU MIA Set | FM | 2.1448 | 17 |
| Unlearning | TOFU Neighbor Set | FB Score | 63 | 17 |
| Unlearning | TOFU Forget Set | FB | 65.97 | 17 |
| Knowledge Retention | RWKU (Neighbor Set) | FB Score | 61.03 | 17 |
| Knowledge Retention | RWKU Famous People Neighbor Set | FB Score | 49.9 | 7 |
| Utility Preservation | RWKU Famous People Utility Set | GA | 67.6 | 7 |
| Machine Unlearning | RWKU Famous People Forget Set | FB Score | 45.4 | 7 |

Showing 10 of 11 rows.

Other info

Code: http://rwku-bench.github.io