
Erasing Conceptual Knowledge from Language Models

About

In this work, we introduce Erasure of Language Memory (ELM), a principled approach to concept-level unlearning that operates by matching distributions defined by the model's own introspective classification capabilities. Our key insight is that effective unlearning should leverage the model's ability to evaluate its own knowledge, using the language model itself as a classifier to identify and reduce the likelihood of generating content related to undesired concepts. ELM applies this framework to create targeted low-rank updates that reduce generation probabilities for concept-specific content while preserving the model's broader capabilities. We demonstrate ELM's efficacy on biosecurity, cybersecurity, and literary domain erasure tasks. Comparative evaluation reveals that ELM-modified models achieve near-random performance on assessments targeting erased concepts, while simultaneously preserving generation coherence, maintaining benchmark performance on unrelated tasks, and exhibiting strong robustness to adversarial attacks. Our code, data, and trained models are available at https://elm.baulab.info
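The core idea above — reweighting the model's output distribution by its own classification of concept relevance — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, the exponent `eta`, and the toy vocabulary are all assumptions made for clarity.

```python
import numpy as np

def elm_target_distribution(p_lm, p_concept, eta=1.0):
    """Hypothetical sketch of an ELM-style target distribution.

    Downweights each next-token probability in p_lm by how strongly the
    model itself classifies that token as concept-related (p_concept),
    then renormalizes. In the actual method, a low-rank update would be
    trained so the edited model matches a target of this kind.
    """
    # Weight each token by the probability it is NOT concept-related,
    # raised to an assumed erasure-strength exponent eta.
    weights = (1.0 - p_concept) ** eta
    target = p_lm * weights
    return target / target.sum()

# Toy 5-token vocabulary: tokens 0-1 are concept-related, 2-4 are benign.
p_lm = np.array([0.30, 0.20, 0.20, 0.20, 0.10])
p_concept = np.array([0.90, 0.80, 0.05, 0.05, 0.05])

target = elm_target_distribution(p_lm, p_concept, eta=2.0)
```

After reweighting, the concept-related tokens' probabilities drop sharply while the benign tokens' shares grow, which is the qualitative behavior the abstract describes.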

Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | MT-Bench | – | – | 189 |
| Knowledge Unlearning | WMDP bio | Accuracy | 29.8 | 20 |
| Knowledge Suppression | WMDP cyber | Accuracy | 27.3 | 4 |
| Language Understanding | MMLU | MMLU Knowledge Score | 56.7 | 4 |
| Output Specificity | Alpaca | KL Divergence (Specificity) | 0.067 | 4 |
