
WIN-U: Woodbury-Informed Newton-Unlearning as a retain-free Machine Unlearning Framework

About

Privacy concerns in LLMs have led to a rapidly growing need to enforce the "right to be forgotten." Machine unlearning addresses precisely this task: removing the influence of specific data, the forget set, from a trained model. The gold standard for unlearning is to produce the model that would have been learned on the rest of the training data alone, the retain set. Most existing unlearning methods rely on direct access to the retained data, which may be impractical due to privacy or cost constraints. We propose WIN-U, a retained-data-free unlearning framework that requires only second-order information about the model originally trained on the full data. Unlearning is performed in a single Newton-style step. Using the Woodbury matrix identity and a generalized Gauss-Newton approximation of the forget-set curvature, the WIN-U update recovers the closed-form linear solution and serves as a local second-order approximation to the gold-standard retraining optimum. Extensive experiments on vision and language benchmarks demonstrate that WIN-U achieves state-of-the-art performance in unlearning efficacy and utility preservation, while being more robust against relearning attacks than existing methods. Importantly, WIN-U does not require access to the retained data.
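The closed-form claim in the abstract can be illustrated in the ridge-regression case, where a Woodbury "downdate" of the full-data normal equations recovers the retain-only solution exactly without ever touching the retain set. This is a minimal sketch under that linear setting, not the paper's implementation; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lam = 200, 10, 5, 1e-2   # samples, features, forget rows, ridge strength

X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
Xf, yf = X[:m], y[:m]             # forget set Df
Xr, yr = X[m:], y[m:]             # retain set Dr (used only to verify at the end)

# Model trained on the full data: theta = A^{-1} b, with A = X^T X + lam*I.
A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
b = X.T @ y

# Woodbury identity removes the forget-set curvature Xf^T Xf from A:
#   (A - Xf^T Xf)^{-1} = A^{-1} + A^{-1} Xf^T (I - Xf A^{-1} Xf^T)^{-1} Xf A^{-1}
# Only an m x m system is solved, where m = |Df| << d in practice.
K = np.linalg.inv(np.eye(m) - Xf @ A_inv @ Xf.T)
Ar_inv = A_inv + A_inv @ Xf.T @ K @ Xf @ A_inv

# Single Newton-style step: unlearned weights from second-order info alone.
theta_unlearned = Ar_inv @ (b - Xf.T @ yf)

# Gold standard for comparison: retrain from scratch on the retain set.
theta_retrain = np.linalg.solve(Xr.T @ Xr + lam * np.eye(d), Xr.T @ yr)
print(np.allclose(theta_unlearned, theta_retrain))  # True
```

In the linear case the match is exact; for deep networks the paper replaces the exact forget-set Hessian with a generalized Gauss-Newton approximation, so the same update becomes a local second-order approximation to retraining rather than an identity.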

Xingjian Zhao, Mohammad Mohammadi Amiri, Malik Magdon-Ismail • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| LLM Unlearning | MMLU | - | 30 |
| Machine Unlearning | MUSE (forget set Df and retain set Dr) | - | 15 |
| Machine Unlearning | TOFU Forget10 | QA Probability (Pre-Unlearning): 22.6 | 8 |
| Knowledge Unlearning | MUSE (forget set Df) | VerbMem Df Pre: 34.7 | 8 |
| Machine Unlearning | WMDP cyber | Accuracy (Pre-Unlearning): 37.6 | 7 |
| Machine Unlearning | WMDP bio | Accuracy (Pre-Unlearning): 60.0 | 7 |
