
A Unified Framework for Model Editing

About

ROME and MEMIT are widely believed to be two different model-editing algorithms, the major difference between them being the ability to perform batched edits. In this paper, we unify the two algorithms under a single conceptual umbrella: both optimize the same goal, which we call the preservation-memorization objective. ROME optimizes this objective under an equality constraint and performs one edit at a time, whereas MEMIT employs a more flexible least-squares constraint that allows batched edits. We generalize ROME and enable batched editing under the equality constraint with EMMET (an Equality-constrained Mass Model Editing algorithm for Transformers), a new batched memory-editing algorithm. EMMET can perform batched edits up to a batch size of 10,000 with performance very similar to MEMIT across multiple dimensions. With the introduction of EMMET, we truly unify ROME and MEMIT and show that both algorithms are equivalent in terms of their optimization objective, their abilities (singular and batched editing), their model-editing performance, and their limitations.
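An equality-constrained preservation-memorization objective of this kind admits a closed-form batched weight update. The following is a minimal NumPy sketch, not the authors' implementation: the dimensions are illustrative, and the closed form W = W0 + (V_E - W0 K_E)(K_E^T C0^{-1} K_E)^{-1} K_E^T C0^{-1}, with C0 = K0 K0^T, is stated here as an assumption about the update rather than quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_pres, batch = 64, 32, 256, 8

W0 = rng.standard_normal((d_out, d_in))   # original layer weights
K0 = rng.standard_normal((d_in, n_pres))  # keys whose outputs must be preserved
K_E = rng.standard_normal((d_in, batch))  # keys of the facts being edited in
V_E = rng.standard_normal((d_out, batch)) # target values for the edited facts

# Preservation covariance over the protected keys.
C0 = K0 @ K0.T
C0_inv = np.linalg.inv(C0)

# Closed-form batched update under the equality constraint W K_E = V_E:
# W = W0 + (V_E - W0 K_E) (K_E^T C0^{-1} K_E)^{-1} K_E^T C0^{-1}
resid = V_E - W0 @ K_E
D = np.linalg.inv(K_E.T @ C0_inv @ K_E)
W = W0 + resid @ D @ K_E.T @ C0_inv

# The equality constraint holds exactly for the whole batch.
assert np.allclose(W @ K_E, V_E)
```

Because the update lives in the span of `C0_inv @ K_E`, outputs on the preserved keys `K0` move as little as the constraint allows, which is the trade-off the preservation-memorization objective formalizes.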

Akshat Gupta, Dev Sajnani, Gopala Anumanchipalli • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Knowledge Editing | zsRE | Generality | 31.2 | 110 |
| Machine Unlearning | RWKU Llama 3.1 8B (Forget Set) | FB Score | 36.1 | 39 |
| Knowledge Editing | Counterfact 10,000 facts | Relational Score | 8.55e+3 | 27 |
| Knowledge Editing | ZsRE 10,000 facts | Reliability | 70.37 | 27 |
| Model Editing | CounterFact | Efficacy | 86.37 | 24 |
| Model Editing | zsRE | Efficacy | 76.35 | 24 |
| Knowledge Editing | CHED | S | 93.5 | 16 |
| Text Fluency | CHED and CounterFact | Average Score | 7.4 | 16 |
| General Language Understanding | General Ability Suite (C-QA, T-QA, LAM, MMLU, L-Code) | Average Score | 14.7 | 16 |
| Unlearning | TOFU | Probability | 0.4916 | 10 |
