
Investigating Model Editing for Unlearning in Large Language Models

About

Machine unlearning aims to remove unwanted information from a model, but many existing methods are inefficient for LLMs with large parameter counts, or fail to fully remove the targeted information without degrading performance on knowledge that should be retained. Model editing algorithms address a related problem, changing information in a model, but they focus on redirecting inputs to a new target rather than removing the information altogether. In this work, we explore the editing algorithms ROME, IKE, and WISE and design new editing targets suited to an unlearning setting. We show that, depending on the setting, model editing approaches can exceed baseline unlearning methods in quality of forgetting. However, like traditional unlearning techniques, they struggle to capture the full scope of what is to be unlearned without damaging overall model performance.
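To make the setup concrete, the sketch below shows how an unlearning-style edit might be expressed through a knowledge-editing toolkit. It assumes the EasyEdit library (which provides ROME, IKE, and WISE implementations); the hyperparameter path, the example fact, and the refusal-style target are illustrative placeholders, not the paper's actual editing targets.

```python
# Minimal sketch: repurpose a knowledge edit for unlearning by
# redirecting a fact to a refusal-style target instead of a new answer.
from easyeditor import BaseEditor, ROMEHyperParams

# Placeholder config path; EasyEdit ships per-model hparam files.
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl.yaml')
editor = BaseEditor.from_hparams(hparams)

metrics, edited_model, _ = editor.edit(
    prompts=['Who is the author of Example Book?'],  # hypothetical fact
    ground_truth=['Jane Doe'],                       # knowledge to suppress
    target_new=['I do not know the answer.'],        # assumed unlearning target
    subject=['Example Book'],                        # ROME localizes the edit around this subject
)
```

After such an edit, forgetting quality can be measured by comparing the edited model's generations on the forget set against the original answers, while checking that retain-set answers are unchanged; the benchmark rows below report exactly these ROUGE-based comparisons.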

Shariqah Hossain, Lalana Kagal • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Machine Unlearning | TOFU 1.0 (Retain Set) | ROUGE-L | 98.2 | 24 |
| Machine Unlearning Evaluation | TOFU (Real Authors) | ROUGE | 93.3 | 14 |
| Machine Unlearning Evaluation | TOFU (Forget) | ROUGE | 0.986 | 14 |
| Machine Unlearning Evaluation | TOFU Real World | ROUGE | 87.5 | 14 |
| Machine Unlearning Evaluation | TOFU | Model Utility | 61.2 | 12 |
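The ROUGE scores above compare model generations against reference answers; ROUGE-L in particular is an F-measure over the longest common subsequence (LCS) of tokens. A self-contained sketch of the metric (whitespace tokenization is a simplifying assumption):

```python
def lcs_len(a: list[str], b: list[str]) -> int:
    # Longest common subsequence length via dynamic programming.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l(candidate: str, reference: str) -> float:
    # ROUGE-L F1: harmonic mean of LCS precision (vs. candidate length)
    # and LCS recall (vs. reference length).
    cand, ref = candidate.split(), reference.split()
    if not cand or not ref:
        return 0.0
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# No token overlap with the original answer suggests the fact was suppressed.
print(rouge_l("I do not know the answer.", "Jane Doe wrote Example Book."))  # 0.0
```

In TOFU-style evaluation, low ROUGE on the forget split indicates successful forgetting, while high ROUGE on the retain, real-authors, and real-world splits indicates that other knowledge survived the edit.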
