Uncovering Context Reliance in Unstructured Knowledge Editing

About

Editing large language models (LLMs) with real-world, unstructured knowledge is essential for correcting and updating their internal parametric knowledge. In this work, we revisit fundamental next-token prediction (NTP) as a candidate paradigm for unstructured editing. We identify Context Reliance as a critical failure mode of NTP-based approaches, where knowledge acquired from edited text becomes highly dependent on its preceding context, leading to recall failures when that context is absent during inference. This hypothesis is supported by our empirical validation that prepending the context during inference recovers knowledge recall. We further theoretically demonstrate that Context Reliance is an inherent consequence of gradient-based optimization, which tends to bind acquired knowledge to a specific aggregated contextual representation. To address this, we propose a simple yet effective COntext-INdependent editing framework (COIN), encouraging the model to focus on knowledge within a local scope rather than memorizing contextual patterns. Evaluations show that COIN reduces Context Reliance by 45.2% and outperforms strong baselines by 23.6% in editing success rate, highlighting the vital role of mitigating Context Reliance for robust editing.
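To make the setting concrete, below is a minimal, hypothetical sketch of an NTP-based edit and the context-prepending probe the abstract describes, together with a label-masked "local scope" variant in the spirit of (but not identical to) COIN. The model name ("gpt2"), the example passage, and the bare fine-tuning loop are all illustrative assumptions, not the paper's code.

```python
# Illustrative sketch only. Assumptions: a Hugging Face causal LM ("gpt2" as a
# stand-in), a toy edit passage, and a bare fine-tuning loop. None of this is
# the paper's COIN implementation; it just makes the failure mode concrete.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Unstructured edit passage: surrounding context plus the fact to inject.
context = "According to a recent announcement, the lab relocated. "
fact = "The lab's headquarters is now in Lyon."

def ntp_edit(text, steps=20, lr=1e-4):
    """Vanilla next-token-prediction fine-tuning over the whole passage."""
    ids = tok(text, return_tensors="pt").input_ids
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        loss = model(ids, labels=ids).loss    # NTP loss on every token
        loss.backward()
        opt.step()
        opt.zero_grad()

def local_scope_edit(ctx, fct, steps=20, lr=1e-4):
    """Context-independent variant (NOT the paper's COIN): the context stays
    in the input but is label-masked, so gradients flow only through the
    fact span, discouraging memorization of contextual patterns."""
    ids = tok(ctx + fct, return_tensors="pt").input_ids
    # Assumes tokenizing ctx alone aligns with ctx + fct; fine for a sketch.
    n_ctx = tok(ctx, return_tensors="pt").input_ids.size(1)
    labels = ids.clone()
    labels[:, :n_ctx] = -100                  # ignore context tokens in loss
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        loss = model(ids, labels=labels).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

@torch.no_grad()
def recall_nll(prompt, target):
    """Mean NLL of `target` given `prompt`; lower means better recall."""
    model.eval()
    p = tok(prompt, return_tensors="pt").input_ids
    t = tok(target, return_tensors="pt").input_ids
    ids = torch.cat([p, t], dim=1)
    labels = ids.clone()
    labels[:, : p.size(1)] = -100             # score only the target span
    return model(ids, labels=labels).loss.item()

ntp_edit(context + fact)
query, target = "The lab's headquarters is now in", " Lyon."
# Context Reliance shows up as a gap between these two numbers:
print("recall without context:", recall_nll(query, target))
print("recall with context:   ", recall_nll(context + query, target))
```

The probe at the end mirrors the empirical validation described in the abstract: if the edited fact is recalled only when the edit-time context is prepended, the edit is context-reliant. The paper's actual COIN mechanism may differ from the simple label-masking shown in `local_scope_edit`.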

Zisheng Zhou, Mengqi Zhang, Shiguang Wu, Xiaotian Ye, Chi Zhang, Zhumin Chen, Pengjie Ren • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Completion | AKEW Completion | Precision | 63.44 | 16
Knowledge Editing | UnKEBench | Precision | 52.17 | 16
Question Answering | AKEW | Precision | 51.31 | 16
Unstructured Knowledge Editing | AKEW Com. | ROUGE-L Precision | 60.52 | 16
Unstructured Knowledge Editing | AKEW-QA | ROUGE-L Precision | 41.78 | 16
Unstructured Knowledge Editing | UnKEBench | Precision (ROUGE-L) | 39.62 | 16
Knowledge Editing | MQuAKE | Average Accuracy | 0.7589 | 8