Maximizing Local Entropy Where It Matters: Prefix-Aware Localized LLM Unlearning

About

Machine unlearning aims to remove sensitive knowledge from Large Language Models (LLMs) while preserving general utility. However, existing approaches typically treat all tokens in a response indiscriminately and enforce uncertainty over the entire vocabulary. This global treatment causes unnecessary utility degradation and extends optimization to content-agnostic regions. To address these limitations, we propose PALU (Prefix-Aware Localized Unlearning), a framework driven by a local entropy-maximization objective across both the temporal and vocabulary dimensions. PALU reveals that (i) suppressing the sensitive prefix alone is sufficient to sever the causal generation link, and (ii) flattening only the top-$k$ logits is adequate to maximize uncertainty in the critical subspace. These findings allow PALU to avoid redundant optimization across the full vocabulary and parameter space while minimizing collateral damage to general model performance. Extensive experiments validate that PALU achieves superior forgetting efficacy and utility preservation compared with state-of-the-art baselines.
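The core mechanism described in the abstract, flattening only the top-$k$ logits at the sensitive-prefix positions, can be pictured with a short sketch. The snippet below is an illustrative assumption rather than the authors' implementation: the function name, the choice of $k$, the KL-to-uniform formulation of entropy maximization, and the mask convention are all placeholders.

```python
import torch
import torch.nn.functional as F

def topk_prefix_entropy_loss(logits: torch.Tensor,
                             prefix_mask: torch.Tensor,
                             k: int = 100) -> torch.Tensor:
    """Hypothetical localized-unlearning term (not the paper's code).

    logits:      (batch, seq_len, vocab) model outputs on a forget sample
    prefix_mask: (batch, seq_len) bool, True only at sensitive-prefix positions
    k:           size of the logit subspace to flatten (assumed value)
    """
    # Keep only the top-k logits at each position and renormalize over them.
    topk_logits, _ = logits.topk(k, dim=-1)            # (B, T, k)
    log_probs = F.log_softmax(topk_logits, dim=-1)

    # KL(uniform || p) over the top-k subspace: minimizing it pushes the
    # restricted distribution toward uniform, i.e. maximizes local entropy.
    uniform = torch.full_like(log_probs, 1.0 / k)
    kl_to_uniform = (uniform * (uniform.log() - log_probs)).sum(dim=-1)  # (B, T)

    # Restrict the objective to the sensitive prefix; all other token
    # positions contribute nothing, leaving their predictions untouched.
    masked = kl_to_uniform * prefix_mask.float()
    return masked.sum() / prefix_mask.float().sum().clamp(min=1.0)
```

In a full pipeline such a term would typically be combined with a retain-set utility loss; the abstract does not specify those details, so the loss weighting and the construction of prefix_mask are left open here.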

Naixin Zhai, Pengyang Shao, Binbin Zheng, Yonghui Yang, Fei Shen, Long Bai, Xun Yang • 2026

Related benchmarks

Task               | Dataset                | Metric          | Result   | Rank
Machine Unlearning | TOFU (5%)              | Forget Quality  | 0.9238   | 45
Machine Unlearning | MUSE-News Llama 2 7B   | Privacy Leakage | -45.9068 | 27
Machine Unlearning | MUSE Books             | Privacy Leakage | -55.7544 | 25
Machine Unlearning | TOFU 5% forget         | Loss            | 0.1434   | 20
