
ChainEdit: Propagating Ripple Effects in LLM Knowledge Editing through Logical Rule-Guided Chains

About

Current knowledge editing methods for large language models (LLMs) struggle to maintain logical consistency when propagating ripple effects to associated facts. We propose ChainEdit, a framework that synergizes knowledge graph-derived logical rules with the logical reasoning capabilities of LLMs to enable systematic chain updates. By automatically extracting logical patterns from structured knowledge bases and aligning them with the LLM's internal logic, ChainEdit dynamically generates and edits logically connected knowledge clusters. Experiments demonstrate an improvement of more than 30% in logical generalization over baselines while preserving editing reliability and specificity. We further address evaluation biases in existing benchmarks through knowledge-aware protocols that disentangle external dependencies. This work establishes new state-of-the-art performance on ripple-effect propagation while ensuring internal logical consistency after knowledge editing.

Zilu Dong, Xiangqing Shen, Zinong Yang, Rui Xia • 2025
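The chain-update idea from the abstract can be illustrated with a toy sketch: when a fact is edited, logical rules derived from a knowledge graph determine which dependent facts must also change. The rule table, fact store, and function names below are hypothetical illustrations, not the authors' ChainEdit implementation.

```python
# Toy knowledge base: (subject, relation) -> object
kb = {
    ("France", "capital"): "Paris",
    ("Paris", "country"): "France",
}

# Hypothetical inverse rules, standing in for logical patterns
# extracted from a structured knowledge base:
# if (s, r) = o holds, then (o, inverse_rules[r]) = s must hold.
inverse_rules = {"capital": "country", "country": "capital"}

def chain_edit(kb, subject, relation, new_object):
    """Apply an edit, then propagate its ripple effect via inverse rules."""
    old_object = kb.get((subject, relation))
    kb[(subject, relation)] = new_object
    inv = inverse_rules.get(relation)
    if inv:
        # Retract the now-inconsistent inverse fact and assert the new one,
        # keeping the knowledge cluster logically consistent.
        if old_object is not None:
            kb.pop((old_object, inv), None)
        kb[(new_object, inv)] = subject
    return kb

chain_edit(kb, "France", "capital", "Lyon")
```

After the edit, the derived fact ("Lyon", "country") = "France" is asserted and the stale ("Paris", "country") entry is retracted, which is the kind of chained consistency the ripple-effect benchmarks measure.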

Related benchmarks

Task              | Dataset                          | Result           | Rank
Knowledge Editing | replaced                         | Reliability: 100 | 16
Knowledge Editing | in-prompt                        | Reliability: 1   | 16
Knowledge Editing | filtered dataset original (test) | Reliability: 1   | 16
Knowledge Editing | RIPPLEEDITS single-instance      | Reliability: 100 | 16

Other info

Code
