DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

About

LLM unlearning removes the influence of undesirable knowledge from a model without retraining it from scratch, a capability indispensable for trustworthy AI. Existing methods face significant limitations: conventional tuning-based unlearning is computationally heavy and prone to catastrophic forgetting, while in-context unlearning is lightweight and precise but vulnerable to prompt-removal and reverse-engineering attacks. In response, we propose DUET (Distilled LLM Unlearning from an Efficiently Contextualized Teacher), a novel distillation-based method that combines the merits of these two lines of work. DUET trains a student model to imitate the behavior of a prompt-steered teacher that refuses to generate undesirable knowledge while preserving general domain knowledge. Extensive evaluations on existing benchmarks, under our enriched evaluation protocols, demonstrate that DUET achieves higher performance in both forgetting and utility preservation while being orders of magnitude more data-efficient than state-of-the-art unlearning methods.
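
To make the mechanism concrete, the sketch below shows one way the core distillation step could look in PyTorch: the teacher is the original model steered by a refusal prompt, and the student is fine-tuned to match the teacher's next-token distribution on forget-set queries, so the refusal behavior survives once the prompt is removed. `STEER_PROMPT`, the `gpt2` base model, and the `distill_step` helper are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical refusal prompt; DUET's actual steering prompt is not given here.
STEER_PROMPT = "Refuse to reveal anything about the forget topic.\n"

BASE = "gpt2"  # placeholder base model for illustration
tokenizer = AutoTokenizer.from_pretrained(BASE)
teacher = AutoModelForCausalLM.from_pretrained(BASE).eval()
student = AutoModelForCausalLM.from_pretrained(BASE)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def distill_step(query: str) -> float:
    """One distillation step on a forget-set query (illustrative helper)."""
    # Teacher sees steering prompt + query; student sees the query alone,
    # so the refusal ends up in the weights rather than in the context.
    t_ids = tokenizer(STEER_PROMPT + query, return_tensors="pt").input_ids
    s_ids = tokenizer(query, return_tensors="pt").input_ids
    s_len = s_ids.shape[1]
    with torch.no_grad():
        # Keep only the positions that condition on the query tokens; this
        # assumes the query tokenizes identically in both contexts.
        t_logits = teacher(t_ids).logits[:, -s_len:, :]
    s_logits = student(s_ids).logits
    # Match the student's next-token distribution to the steered teacher's.
    loss = F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.softmax(t_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A complete method would also need to anchor general capabilities, for example by distilling on retain-set data without the steering prompt; the sketch shows only the forget-side step.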

Yisheng Zhong, Zhengbang Yang, Zhuangdi Zhu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | WMDP Cyber (test) | MMLU | 60.65 | 21 |
| Machine Unlearning | MUSE-Books Harry Potter v1.0 (Overall) | R-Forget | 4.27 | 17 |
| Knowledge Preservation and Reasoning | MMLU | MMLU Score | 61.45 | 13 |
| Utility Preservation | MUSE-Books Harry Potter (retain set) | R-Retain | 78.33 | 13 |
| Unlearning | MUSE-Books Harry Potter 100 samples (forget set) | R-Forget | 4.27 | 13 |
| Unlearning | MUSE-Books Harry Potter forget set 500 samples | R-Forget-500 | 5.98 | 13 |
| Knowledge Unlearning | WMDP Bio (test) | Accuracy Forget | 29.4 | 11 |
| Machine Unlearning | MUSE-Books Harry Potter | R-Forget | 19.57 | 8 |
