
Machine Unlearning of Pre-trained Large Language Models

About

This study investigates the concept of the "right to be forgotten" within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models, a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over $10^5$ times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development.

Jin Yao, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, Xiang Yue • 2024
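
The combined objective highlighted in the abstract pairs gradient ascent on the data to be forgotten with gradient descent on in-distribution data. Below is a minimal PyTorch-style sketch of that idea; the names (`model`, `forget_batch`, `retain_batch`, `ga_weight`) and the Hugging Face-style `model(...).loss` interface are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: one unlearning update that combines gradient ascent on the
# forget set with gradient descent on in-distribution (retain) data.
# All names are illustrative; the paper's actual code may differ.

def unlearning_step(model, optimizer, forget_batch, retain_batch, ga_weight=1.0):
    """Ascend on forget data, descend on in-distribution data, in one step."""
    optimizer.zero_grad()

    # Gradient ascent on the forget set: the causal-LM loss enters the
    # combined objective with a negative sign, so minimizing the total
    # pushes the loss on forget data up.
    forget_loss = model(**forget_batch).loss  # batch carries input_ids/labels

    # Gradient descent on in-distribution data, which the paper reports
    # improves robustness to the choice of hyperparameters.
    retain_loss = model(**retain_batch).loss

    loss = -ga_weight * forget_loss + retain_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this step would be looped over paired forget/retain batches, with `ga_weight` (a hypothetical knob for the ascent term) tuned following the hyperparameter guidelines the paper provides.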

Related benchmarks

Task                           Dataset                                       Metric        Result  Rank
Language Understanding         MMLU                                          Accuracy      44.7    825
Knowledge                      MMLU                                          Accuracy      26.4    136
Knowledge Unlearning           WMDP bio                                      Accuracy      24.7    42
Knowledge Unlearning           WMDP cyber                                    Accuracy      26.6    38
Fluency Assessment             WMDP                                          Mean Fluency  1       22
Multimodal Machine Unlearning  SAFEERASER Efficacy v1.0 (forget set)         ASR           2.7     18
Multimodal Machine Unlearning  SAFEERASER Generality v1.0 (test)             ASR           1.2     18
Multimodal Machine Unlearning  SAFEERASER Model Utility v1.0 (general set)   Specificity   56      18
LLM Unlearning                 RWKU                                          USR           76.2    16
Machine Unlearning             MUSE                                          --            --      16
(10 of 28 rows shown.)
