
Model Merging for Knowledge Editing

About

Large Language Models (LLMs) require continuous updates to keep their knowledge accurate and current as the world evolves. While existing knowledge editing approaches offer various solutions for updating knowledge, they often struggle in sequential editing scenarios and degrade the model's general capabilities, which significantly limits their practical applicability. This paper proposes a two-stage framework that combines robust supervised fine-tuning (R-SFT) with model merging for knowledge editing. Our method first fine-tunes the LLM to fully internalize the new knowledge, then merges the fine-tuned model with the original foundation model to preserve both the newly acquired knowledge and the model's general capabilities. Experimental results demonstrate that our approach significantly outperforms existing methods in sequential editing while better preserving the model's original performance, all without requiring any architectural changes. Code is available at: https://github.com/Applied-Machine-Learning-Lab/MM4KE.
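The second stage described above, merging the fine-tuned model back into the foundation model, can be sketched as a simple linear interpolation of weights. This is an illustrative sketch only: the function name, parameter dicts, and the `alpha` coefficient are assumptions for exposition, not details taken from the paper's actual merging method.

```python
def merge_weights(base, finetuned, alpha=0.5):
    """Return a weighted average of two parameter dicts.

    alpha = 0 keeps the foundation model; alpha = 1 keeps the
    fine-tuned (edited) model; values in between trade off newly
    acquired knowledge against preservation of general capabilities.
    """
    # Both models must share the same architecture (same parameter names).
    assert base.keys() == finetuned.keys(), "models must share architecture"
    return {name: (1.0 - alpha) * base[name] + alpha * finetuned[name]
            for name in base}

# Toy example with scalar "weights" standing in for tensors:
base = {"layer.w": 1.0, "layer.b": 0.0}
edited = {"layer.w": 3.0, "layer.b": 2.0}
merged = merge_weights(base, edited, alpha=0.5)
# merged == {"layer.w": 2.0, "layer.b": 1.0}
```

In practice the same element-wise interpolation would be applied to each tensor in the two models' state dicts; the interpolation coefficient controls how much of the edit is retained versus how much of the original model is preserved.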

Zichuan Fu, Xian Wu, Guojing Li, Yingying Zhang, Yefeng Zheng, Tianshi Ming, Yejing Wang, Wanyu Wang, Xiangyu Zhao • 2025

Related benchmarks

Task                               Dataset       Metric            Result  Rank
Question Answering                 SQuAD         F1                21.1    127
Logical Reasoning                  LogiQA        Accuracy          41      84
General Knowledge Assessment       C-Eval        Accuracy          79.3    37
Discrete Reasoning                 DROP          Exact Match (EM)  10.3    19
Conversational Question Answering  CoQA          EM                60.3    8
Knowledge Editing                  QAEdit        Reliability       37.2    8
Multi-hop Knowledge Editing        MQuAKE CF v2  2-hop Score       12      6
