
Mechanistic Circuit-Based Knowledge Editing in Large Language Models

About

Deploying Large Language Models (LLMs) in dynamic real-world environments raises the challenge of updating their pre-trained knowledge. While existing knowledge editing methods can reliably patch isolated facts, they frequently suffer from a "Reasoning Gap": the model recalls the edited fact but fails to use it in multi-step reasoning chains. To bridge this gap, we introduce MCircKE (Mechanistic Circuit-based Knowledge Editing), a novel framework that enables a precise "map-and-adapt" editing procedure. MCircKE first identifies the causal circuits responsible for a specific reasoning task, capturing both the storage of the fact and the routing of its logical consequences. It then surgically updates parameters exclusively within this mapped circuit. Extensive experiments on the MQuAKE-3K benchmark demonstrate the effectiveness of the proposed method for multi-hop reasoning in knowledge editing.
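The "map-and-adapt" idea can be sketched in a few lines: once a circuit has been mapped, the edit is an ordinary gradient step that is masked so only parameters inside the circuit change, leaving the rest of the model untouched. This is a minimal illustrative sketch, not the paper's implementation; the names `edit_step` and `circuit_mask`, and the toy parameter layout, are assumptions for illustration.

```python
# Hypothetical sketch of a circuit-restricted edit step (not the authors' code).
# Parameters are toy flat lists; circuit_mask marks the entries that the
# circuit-mapping stage identified as causally responsible for the fact.

def edit_step(params, grads, circuit_mask, lr=0.1):
    """Apply a gradient update only to parameters inside the mapped circuit.

    params: {name: [float, ...]}          -- toy model weights
    grads:  {name: [float, ...]}          -- gradients of the edit loss
    circuit_mask: {name: set of indices}  -- entries on the causal circuit
    """
    updated = {}
    for name, weights in params.items():
        inside = circuit_mask.get(name, set())
        updated[name] = [
            w - lr * g if i in inside else w  # edit only circuit entries
            for i, (w, g) in enumerate(zip(weights, grads[name]))
        ]
    return updated

params = {"mlp.w": [1.0, 2.0, 3.0], "attn.w": [4.0, 5.0]}
grads  = {"mlp.w": [0.5, 0.5, 0.5], "attn.w": [0.5, 0.5]}
mask   = {"mlp.w": {1}}  # only one weight lies on the mapped circuit

new_params = edit_step(params, grads, mask, lr=1.0)
print(new_params["mlp.w"])   # only index 1 changes
print(new_params["attn.w"])  # untouched, preserving locality
```

Restricting the update this way is what the Locality rows of the benchmark table measure: parameters outside the circuit are never moved, so unrelated capabilities (e.g. CSQA, MMLU performance) should degrade as little as possible.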

Tianyi Zhao, Yinhan He, Wendy Zheng, Chen Chen • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Knowledge Editing | MQuAKE-3K (test) | Overall M-Acc.: 50.4168 | 16 |
| Locality | CSQA | Delta (%): 0.24 | 5 |
| Locality | MMLU | Delta (%): 10 | 5 |
| Knowledge Editing | MQuAKE-3K | M-hop Success Rate: 59.253 | 4 |
