
Beyond Forgetting: Machine Unlearning Elicits Controllable Side Behaviors and Capabilities

About

We consider representation misdirection (RM), a class of LLM unlearning methods that achieves forgetting by manipulating forget-representations, i.e., the latent representations of forget samples. Although central to RM, the role of the target vectors these methods use remains underexplored. Here, we revisit RM through the lens of the linear representation hypothesis. Specifically, if one can identify a one-dimensional representation corresponding to a high-level concept, the linear representation hypothesis enables linear operations on this concept vector within the forget-representation space. Under this view, we hypothesize that, beyond forgetting, machine unlearning elicits controllable side behaviors and stronger side capabilities corresponding to the high-level concept. We empirically validate this hypothesis across a wide range of tasks, including behavioral control (e.g., controlling unlearned models' truthfulness, sentiment, and refusal) and capability enhancement (e.g., improving unlearned models' in-context learning). Our findings reveal that this striking phenomenon is double-edged: it is a hidden risk if misused, but also a mechanism that can be harnessed to develop models with stronger capabilities and controllable behaviors.
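To make the linear-operations idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): a concept direction is estimated as the normalized difference of mean representations between samples that do and do not express the concept, and a hidden state is then steered by adding a scaled copy of that direction. All names, shapes, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden dimension (illustrative, not from the paper)

# Toy "hidden states" for samples expressing vs. not expressing the concept.
pos_reps = rng.normal(loc=1.0, scale=0.1, size=(32, d))
neg_reps = rng.normal(loc=-1.0, scale=0.1, size=(32, d))

# One-dimensional concept vector: difference of class means, normalized.
concept = pos_reps.mean(axis=0) - neg_reps.mean(axis=0)
concept /= np.linalg.norm(concept)

def steer(hidden: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly shift a representation along the concept direction."""
    return hidden + alpha * concept

# Steering increases the representation's alignment with the concept.
h = neg_reps[0]
before = float(h @ concept)
after = float(steer(h, alpha=4.0) @ concept)
assert after > before
```

Because `concept` is unit-norm, adding `alpha * concept` shifts the projection onto the concept direction by exactly `alpha`; in an unlearned model, the analogous shift would be applied to forget-representations inside the network rather than to toy vectors.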

Tien Dang, The-Hai Nguyen, Dinh Mai Phuong, Nguyen Minh Phuong, Hoang Thanh-Tung, Le-Minh Nguyen, Naoya Inoue • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Massive Multitask Language Understanding | MMLU | Accuracy: 58.7 | 31 |
| Multiple-choice Question Answering | TruthfulQA Multiple-choice | MC1 Score: 44.9 | 15 |
| Open-ended generation | TruthfulQA Open-ended | BLEU: 51.2 | 10 |
| Machine Unlearning | WMDP (average of biology and cyber) | Accuracy: 0.511 | 10 |
