
Auditing Language Model Unlearning via Information Decomposition

About

We expose a critical limitation in current approaches to machine unlearning in language models: despite the apparent success of unlearning algorithms, information about the forgotten data remains linearly decodable from internal representations. To systematically assess this discrepancy, we introduce an interpretable, information-theoretic framework for auditing unlearning using Partial Information Decomposition (PID). By comparing model representations before and after unlearning, we decompose the mutual information with the forgotten data into distinct components, formalizing the notions of unlearned and residual knowledge. Our analysis reveals that redundant information, shared across both models, constitutes residual knowledge that persists post-unlearning and correlates with susceptibility to known adversarial reconstruction attacks. Leveraging these insights, we propose a representation-based risk score that can guide abstention on sensitive inputs at inference time, providing a practical mechanism to mitigate privacy leakage. Our work introduces a principled, representation-level audit for unlearning, offering theoretical insight and actionable tools for safer deployment of language models.
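The core empirical claim — that forgotten data remains linearly decodable from internal representations after unlearning — can be illustrated with a minimal linear-probe audit. The sketch below is not the paper's method: it uses synthetic vectors standing in for hidden states of an unlearned model, with a small residual signal injected along one direction, and trains a simple logistic-regression probe to distinguish forget-set from retain-set inputs. Probe accuracy well above chance indicates residual knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64-dim vectors standing in for hidden states of an
# "unlearned" model. Forget-set inputs (label 1) vs. retain-set inputs
# (label 0). We inject a residual linear signal along one direction to
# simulate knowledge that survives unlearning.
d, n = 64, 400
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + 2.0 * np.outer(y - 0.5, direction)

# Train/test split.
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Minimal logistic-regression probe trained by gradient descent.
w, b = np.zeros(d), 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    grad = p - y_tr
    w -= lr * (X_tr.T @ grad) / len(y_tr)
    b -= lr * grad.mean()

# Accuracy well above 0.5 means forget-set membership is still
# linearly decodable from the representations.
acc = ((1.0 / (1.0 + np.exp(-(X_te @ w + b))) > 0.5) == y_te).mean()
print(f"probe accuracy: {acc:.2f}")
```

In an actual audit, `X` would be hidden states extracted from the model before and after unlearning; the PID framework then decomposes the probe-detectable information into unlearned and residual components.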

Anmol Goel, Alan Ritter, Iryna Gurevych • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Machine Unlearning | TOFU | Forget Quality (FQ): 0.83 | 43 |
