Neuron-Level Sequential Editing for Large Language Models

About

This work explores sequential model editing in large language models (LLMs), a critical task that modifies internal knowledge within LLMs continuously through multiple rounds of editing, with each round incorporating updates or corrections to adjust the model's outputs without costly retraining. Existing model editing methods, especially those that alter model parameters, typically focus on single-round editing and often face significant challenges in sequential model editing, most notably model forgetting and model failure. To address these challenges, we introduce a new model editing method, Neuron-level Sequential Editing (NSE), tailored for sequential model editing. Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure. Furthermore, we iteratively select neurons across multiple layers for editing based on their activation values to mitigate model forgetting. Our empirical experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods, marking a substantial advancement in the field of sequential model editing. Our code is released at https://github.com/jianghoucheng/NSE.
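The abstract names two mechanisms: computing target hidden states with the model's original (unedited) weights, and restricting each edit to a small set of highly activated neurons. The sketch below illustrates only the neuron-selection idea in a simplified form, assuming a single linear (FFN-style) weight matrix; the function names (select_neurons, edit_selected_neurons) and the MSE objective are hypothetical illustrations, not the authors' implementation, which is available at the repository linked above.

```python
# Hypothetical sketch of activation-based neuron selection and a constrained
# weight edit; NOT the authors' NSE implementation (see the linked repo).
import torch

def select_neurons(activations: torch.Tensor, k: int) -> torch.Tensor:
    """Rank neurons by mean |activation| over the edit prompts and keep the
    top k -- only these rows of the weight matrix will be edited."""
    scores = activations.abs().mean(dim=0)   # shape: (hidden_dim,)
    return torch.topk(scores, k).indices

def edit_selected_neurons(weight: torch.Tensor,        # (hidden_dim, out_dim)
                          keys: torch.Tensor,          # (n_edits, hidden_dim)
                          target_values: torch.Tensor, # (n_edits, out_dim)
                          neuron_idx: torch.Tensor,
                          lr: float = 1e-2,
                          steps: int = 100) -> torch.Tensor:
    """Optimize a delta on the selected neurons only, so the edited layer
    maps `keys` to `target_values` while all other weights stay frozen."""
    base = weight.detach().clone()
    delta = torch.zeros(neuron_idx.numel(), base.shape[1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Out-of-place index_add keeps the update differentiable w.r.t. delta.
        edited = base.index_add(0, neuron_idx, delta)
        loss = torch.nn.functional.mse_loss(keys @ edited, target_values)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return base.index_add(0, neuron_idx, delta.detach())
```

Because the delta touches only the highest-activation neurons, the bulk of the weight matrix is untouched across editing rounds, which is the intuition behind mitigating forgetting; in the actual method, the target values would come from optimizing hidden states under the original weights rather than being supplied directly.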

Houcheng Jiang, Junfeng Fang, Tianyu Zhang, An Zhang, Ruipeng Wang, Tao Liang, Xiang Wang • 2024

Related benchmarks

Task | Dataset | Result | Rank
Sequential Knowledge Editing | CounterFact sequential editing (10,000 samples) | Efficacy Success: 88.95 | 33
Sequential Knowledge Editing | ZsRE sequential editing (10,000 samples) | Efficacy Success: 45.61 | 33
Sequential Model Editing | CounterFact | Efficacy: 99.55 | 24
Sequential Model Editing | zsRE | Efficacy: 96.87 | 24

Other info

Code: https://github.com/jianghoucheng/NSE
