
Context-Robust Knowledge Editing for Language Models

About

Knowledge editing (KE) methods offer an efficient way to modify knowledge in large language models. Current KE evaluations typically assess editing success by considering only the edited knowledge, without any preceding contexts. In real-world applications, however, preceding contexts often trigger retrieval of the original knowledge and undermine the intended edit. To address this issue, we develop CHED, a benchmark designed to evaluate the context robustness of KE methods. Evaluations on CHED show that existing KE methods often fail when preceding contexts are present. To mitigate this shortcoming, we introduce CoRE, a KE method that strengthens context robustness by minimizing context-sensitive variance in the model's hidden states for edited knowledge. CoRE not only improves the editing success rate when a preceding context is present but also preserves the model's overall capabilities. We also provide an in-depth analysis of how preceding contexts differ in impact when introduced as user utterances versus assistant responses, and we dissect attention-score patterns to assess how specific tokens influence editing success.
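The core idea behind CoRE can be illustrated with a minimal sketch: run the edit prompt under several different preceding contexts, read out a hidden state for the edited fact, and penalize how much that state varies across contexts. The sketch below uses a toy embedding table as a stand-in for an LLM layer; all names (`hidden_states`, the token ids, the contexts) are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
embed = rng.normal(size=(100, 16))  # toy embedding table standing in for an LLM layer

def hidden_states(prompt_ids):
    # Stand-in for the model's hidden state at the edited fact's position.
    # In a real setup this would come from a forward pass through the LLM.
    return embed[prompt_ids].mean(axis=0)

edit_prompt = [5, 7, 9]                    # token ids of the edited fact (illustrative)
contexts = [[1, 2], [3], [4, 6, 8]]        # different preceding contexts

# Hidden state of the edit prompt under each preceding context
states = np.stack([hidden_states(ctx + edit_prompt) for ctx in contexts])

# Context-sensitive variance: per-dimension variance across contexts, summed
# into one scalar. An editing objective would minimize this term so the edited
# knowledge is represented consistently regardless of what precedes it.
variance_reg = states.var(axis=0).sum()
```

Minimizing such a term alongside the usual editing loss encourages the edited representation to be stable across contexts, which is the property CHED probes.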

Haewon Park, Gyubin Choi, Minjun Kim, Yohan Jo • 2025

Related benchmarks

Task                           | Dataset                                               | Result              | Rank
Knowledge Editing              | zsRE                                                  | Generality: 46      | 110
Knowledge Editing              | CHED                                                  | S: 89               | 16
Text Fluency                   | CHED and CounterFact                                  | Average Score: 13.2 | 16
General Language Understanding | General Ability Suite (C-QA, T-QA, LAM, MMLU, L-Code) | Average Score: 35.3 | 16

Other info

Code
