
Can We Edit Multimodal Large Language Models?

About

In this paper, we focus on editing Multimodal Large Language Models (MLLMs). Compared to editing single-modal LLMs, multimodal model editing is more challenging and demands a higher level of scrutiny and care in the editing process. To facilitate research in this area, we construct a new benchmark, dubbed MMEdit, for editing multimodal LLMs, and establish a suite of innovative metrics for evaluation. We conduct comprehensive experiments involving various model editing baselines and analyze the impact of editing different components of multimodal LLMs. Empirically, we observe that previous baselines can edit multimodal LLMs to some extent, but the results remain barely satisfactory, indicating the potential difficulty of this task. We hope our work provides the NLP community with useful insights. Code and dataset are available at https://github.com/zjunlp/EasyEdit.
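The scores reported in the benchmark table below are Reliability-style editing metrics. As a rough illustration (a minimal sketch, not the MMEdit or EasyEdit implementation; the function name and exact-match criterion are assumptions), Reliability can be understood as the percentage of edit cases where the post-edit model produces the new target answer:

```python
def reliability(predictions, targets):
    """Percentage of edit cases where the edited model's output
    matches the new target answer (case-insensitive exact match).

    This is an illustrative sketch; the benchmark's actual scoring
    may differ (e.g. token-level comparison).
    """
    assert len(predictions) == len(targets) and targets
    hits = sum(
        p.strip().lower() == t.strip().lower()
        for p, t in zip(predictions, targets)
    )
    return 100.0 * hits / len(targets)


# Example: 3 of 4 hypothetical edits succeed.
preds = ["Paris", "blue", "dog", "seven"]
golds = ["paris", "blue", "cat", "seven"]
print(reliability(preds, golds))  # 75.0
```

Analogous percentages over rephrased prompts (Generality) and unrelated prompts (Locality) complete the usual editing-evaluation suite.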

Siyuan Cheng, Bozhong Tian, Qingbin Liu, Xi Chen, Yongheng Wang, Huajun Chen, Ningyu Zhang • 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Lifelong Knowledge Editing | E-VQA Lifelong Sequential | Rel. Score | 93.88 | 72 |
| Knowledge Editing | MMEdit E-VQA | Reliability | 93.88 | 61 |
| Knowledge Editing | VLKEB | Reliability | 94.29 | 45 |
| Knowledge Editing | MMEdit E-IC | Reliability | 67.4 | 16 |
| Lifelong Knowledge Editing | VLKEB Lifelong Sequential | Reliability | 94.29 | 12 |
| Multimodal Knowledge Editing | MMEdit 10-step sequential editing on VQA | Reliability | 67.8 | 12 |
| Knowledge Editing | MMEdit One-Step Editing | Reliability | 64.3 | 7 |
| Lifelong Knowledge Editing | E-IC Lifelong Sequential | Relational Score | 73.48 | 6 |
| Cross-task Knowledge Editing | MMEdit cross-task | Rel. Score | 65 | 6 |
| Image Caption Editing | MMEdit 10-step sequential editing | Relevance | 65.3 | 6 |

Showing 10 of 14 rows.
