
Disentangling Knowledge Representations for Large Language Model Editing

About

Knowledge editing has emerged as a promising solution for efficiently updating embedded knowledge in large language models (LLMs). While existing approaches demonstrate effectiveness in integrating new knowledge and preserving the original capabilities of LLMs, they fail to maintain fine-grained irrelevant knowledge, namely facts that share the same subject as edited knowledge but differ in relation and object. This challenge arises because subject representations inherently encode multiple attributes, causing the target and fine-grained irrelevant knowledge to become entangled in the representation space, and thus vulnerable to unintended alterations during editing. To address this, we propose DiKE, a novel approach that Disentangles Knowledge representations for LLM Editing. DiKE consists of two key components: a Knowledge Representation Disentanglement (KRD) module that decomposes the subject representation into target-knowledge-related and -unrelated components, and a Disentanglement-based Knowledge Edit (DKE) module that updates only the target-related component while explicitly preserving the unrelated one. We further derive a closed-form, rank-one parameter update based on matrix theory to enable efficient and minimally invasive edits. To rigorously evaluate fine-grained irrelevant knowledge preservation, we construct FINE-KED, a new benchmark comprising fine-grained irrelevant knowledge at different levels of relational similarity to the edited knowledge. Extensive experiments across multiple LLMs demonstrate that DiKE substantially improves fine-grained irrelevant knowledge preservation while maintaining competitive general editing performance.

Mengqi Zhang, Zisheng Zhou, Xiaotian Ye, Qiang Liu, Zhaochun Ren, Zhumin Chen, Pengjie Ren• 2025
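The core idea of the abstract, splitting a subject representation into a target-related component and an unrelated remainder, then applying a rank-one weight update that touches only the former, can be illustrated with a toy NumPy sketch. This is not the authors' derivation; the direction `u`, the edited component, and the identity weight matrix are all hypothetical placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Hypothetical subject representation h and a learned "target-knowledge" direction u.
h = rng.normal(size=d)
u = rng.normal(size=d)
u /= np.linalg.norm(u)

# Disentangle: project h onto u (target-related) and keep the complement (unrelated).
h_target = (h @ u) * u
h_unrelated = h - h_target

# Edit only the target-related component; the unrelated part is preserved verbatim.
h_target_new = 2.0 * u                  # hypothetical edited component
h_edited = h_target_new + h_unrelated

# A closed-form rank-one update mapping h to h_edited while leaving every vector
# orthogonal to h unchanged (minimally invasive; in the spirit of, but not
# identical to, the paper's matrix-theoretic derivation).
W = np.eye(d)                           # stand-in for an MLP weight matrix
delta = np.outer(h_edited - W @ h, h) / (h @ h)
W_new = W + delta

assert np.allclose(W_new @ h, h_edited)  # the edit takes effect

# Any input orthogonal to h passes through unchanged.
v = rng.normal(size=d)
v -= (v @ h) / (h @ h) * h
assert np.allclose(W_new @ v, W @ v)
```

Because `delta` has rank one, the update perturbs the weight matrix along a single direction, which is what makes this style of edit cheap and (by construction) inert on inputs orthogonal to the edited key.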

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Knowledge Editing | CounterFact | Efficacy | 99.9 | 301 |
| Knowledge Editing | LLaMA 3 | Average Runtime | 11.34 | 12 |
| Knowledge Editing | FINE-KED GPT2-XL (test) | Efficacy | 97.4 | 9 |
| Knowledge Editing | FINE-KED LLaMA-3 (test) | Efficacy | 99.1 | 9 |
| Knowledge Editing | FINE-KED GPT-J (test) | Efficacy | 99.1 | 9 |
| Multi-hop Knowledge Editing | MQuAKE | Average Accuracy | 44.39 | 5 |
