In-Context Editing: Learning Knowledge from Self-Induced Distributions

About

In scenarios where language models must incorporate new information efficiently without extensive retraining, traditional fine-tuning methods are prone to overfitting, degraded generalization, and unnatural language generation. To address these limitations, we introduce Consistent In-Context Editing (ICE), a novel approach leveraging the model's in-context learning capability to optimize toward a contextual distribution rather than a one-hot target. ICE introduces a simple yet effective optimization framework for the model to internalize new knowledge by aligning its output distributions with and without additional context. This method enhances the robustness and effectiveness of gradient-based tuning methods, preventing overfitting and preserving the model's integrity. We analyze ICE across four critical aspects of knowledge editing: accuracy, locality, generalization, and linguistic quality, demonstrating its advantages. Experimental results confirm the effectiveness of ICE and demonstrate its potential for continual editing, ensuring that the integrity of the model is preserved while updating information.
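To make the objective concrete, below is a minimal sketch of the core idea, not the authors' implementation: a Hugging Face causal LM ("gpt2" as a stand-in for the edited model) is tuned so that its next-token distribution *without* the context matches its own distribution *with* the new fact prepended, via a KL divergence loss. The fact, query, and hyperparameters are illustrative placeholders, and only the single-token distribution-alignment core described in the abstract is shown.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative setup; model name, fact, and query are placeholders.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

new_fact = "Fact: Lionel Messi plays for Inter Miami. "  # context carrying the edit
query = "Lionel Messi plays for"

ctx_ids = tokenizer(new_fact + query, return_tensors="pt").input_ids
plain_ids = tokenizer(query, return_tensors="pt").input_ids

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for _ in range(10):  # a few gradient steps (step count is illustrative)
    # Self-induced target: the model's own next-token distribution *with*
    # the context prepended. Recomputed each step so it tracks the updating
    # model, and detached so gradients flow only through the context-free
    # prediction.
    with torch.no_grad():
        target = F.softmax(model(ctx_ids).logits[:, -1, :], dim=-1)

    log_probs = F.log_softmax(model(plain_ids).logits[:, -1, :], dim=-1)
    loss = F.kl_div(log_probs, target, reduction="batchmean")  # KL(target || model)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Detaching the contextual target is the key design point in this sketch: the in-context distribution serves as a soft supervision signal rather than a one-hot label, which is what distinguishes the approach from plain fine-tuning on the new fact.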

Siyuan Qi, Bangcheng Yang, Kailin Jiang, Xiaobo Wang, Jiaqi Li, Yifan Zhong, Yaodong Yang, Zilong Zheng • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| VLM Editing | A-OKVQA 2022 (test) | Accuracy | 99 | 48 |
| Vision-Language Model Editing | FVQA 1.0 (test) | Accuracy | 93 | 48 |
| Knowledge Insertion | WikiData recent | Edit Success Rate | 100 | 43 |
| Knowledge Modification | WikiData counterfact | Edit Success Rate | 100 | 15 |
| Knowledge Modification | WikiBio | Edit Success Rate | 100 | 15 |
| Continual Editing | Wikirecent | Edit Success Rate | 100 | 5 |
| Continual Editing | zsRE | Edit Success Rate | 100 | 5 |
| Knowledge Insertion | zsRE | Edit Success Rate | 99.92 | 5 |
| Knowledge Insertion | WikiData recent (test) | Edit Success Rate | 100 | 5 |
| Knowledge Modification | zsRE (test) | Edit Success Rate | 100 | 5 |
