Can We Edit Multimodal Large Language Models?
About
In this paper, we focus on editing Multimodal Large Language Models (MLLMs). Compared to editing single-modal LLMs, multimodal model editing is more challenging and demands a higher level of scrutiny and careful consideration in the editing process. To facilitate research in this area, we construct a new benchmark, dubbed MMEdit, for editing multimodal LLMs, and establish a suite of innovative metrics for evaluation. We conduct comprehensive experiments with various model editing baselines and analyze the impact of editing different components of multimodal LLMs. Empirically, we find that previous baselines can edit multimodal LLMs to some extent, but the results remain unsatisfactory, indicating the potential difficulty of this task. We hope that our work can provide the NLP community with insights. Code and dataset are available at https://github.com/zjunlp/EasyEdit.
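The benchmark results below report a "Reliability" score. In the model-editing literature, reliability is typically the fraction of edit cases where the post-edit model produces the intended target answer. The sketch below illustrates that computation; the function name and inputs are illustrative, not the MMEdit or EasyEdit API.

```python
def reliability(post_edit_predictions, edit_targets):
    """Percentage of edit cases where the edited model outputs the target.

    Illustrative sketch, assuming reliability = post-edit success rate
    on the edited samples themselves.
    """
    assert len(post_edit_predictions) == len(edit_targets)
    hits = sum(p == t for p, t in zip(post_edit_predictions, edit_targets))
    return 100.0 * hits / len(edit_targets)

# Toy example: 3 of 4 edits succeed.
score = reliability(["cat", "dog", "red", "two"],
                    ["cat", "dog", "red", "one"])
print(score)  # 75.0
```

Locality (does the model still answer unrelated questions correctly?) and generality (does the edit transfer to paraphrases?) can be computed the same way over different evaluation sets.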
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Lifelong Knowledge Editing | E-VQA Lifelong Sequential | Rel. Score | 93.88 | 72 |
| Knowledge Editing | MMEdit E-VQA | Reliability | 93.88 | 61 |
| Knowledge Editing | VLKEB | Reliability | 94.29 | 45 |
| Knowledge Editing | MMEdit E-IC | Reliability | 67.4 | 16 |
| Lifelong Knowledge Editing | VLKEB Lifelong Sequential | Reliability | 94.29 | 12 |
| Multimodal Knowledge Editing | MMEdit 10-step sequential editing on VQA | Reliability | 67.8 | 12 |
| Knowledge Editing | MMEdit One-Step Editing | Reliability | 64.3 | 7 |
| Lifelong Knowledge Editing | E-IC Lifelong Sequential | Rel. Score | 73.48 | 6 |
| Cross-task Knowledge Editing | MMEdit cross-task | Rel. Score | 65 | 6 |
| Image Caption Editing | MMEdit 10-step sequential editing | Reliability | 65.3 | 6 |