Mass-Editing Memory in a Transformer
About
Recent work has shown exciting promise in updating large language models with new memories, so as to replace obsolete information or add specialized knowledge. However, this line of work is predominantly limited to updating single associations. We develop MEMIT, a method for directly updating a language model with many memories, demonstrating experimentally that it can scale up to thousands of associations for GPT-J (6B) and GPT-NeoX (20B), exceeding prior work by orders of magnitude. Our code and data are at https://memit.baulab.info.
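As a rough illustration of how such batched edits are applied in practice, the sketch below inserts a small batch of counterfactual memories using the `apply_memit_to_model` entry point and request schema from the linked reference repository (github.com/kmeng01/memit); the import paths, hyperparameter file location, and request fields shown here follow that repository but are assumptions, not verbatim usage.

```python
# Minimal sketch of batched knowledge editing in the style of MEMIT.
# Assumes the reference implementation at https://memit.baulab.info
# (github.com/kmeng01/memit) is on the Python path; the names
# `MEMITHyperParams` and `apply_memit_to_model` and the hparams file
# path below are taken from that repo but may differ across versions.
from transformers import AutoModelForCausalLM, AutoTokenizer

from memit import MEMITHyperParams, apply_memit_to_model

model_name = "EleutherAI/gpt-j-6B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tok = AutoTokenizer.from_pretrained(model_name)

# Each request encodes one (subject, relation, object) memory to write;
# MEMIT is designed to apply thousands of these in a single update.
requests = [
    {
        "prompt": "{} plays the sport of",
        "subject": "LeBron James",
        "target_new": {"str": "football"},
    },
    {
        "prompt": "The capital of {} is",
        "subject": "France",
        "target_new": {"str": "Lyon"},
    },
]

# Hyperparameter JSON files ship with the repo, keyed by model name
# (exact path is an assumption here).
hparams = MEMITHyperParams.from_json("hparams/MEMIT/EleutherAI_gpt-j-6B.json")

# Returns the edited model plus a copy of the original weights,
# so the edit can be reverted if needed.
edited_model, orig_weights = apply_memit_to_model(model, tok, requests, hparams)
```

In contrast to single-edit methods, all requests are folded into one batched update spread across several MLP layers, which is what lets the approach scale to thousands of associations.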
Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, David Bau • 2022
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multitask Language Understanding | MMLU (test) | Accuracy | 21.83 | 303 |
| Knowledge Editing | CounterFact | Efficacy | 9.38e+3 | 301 |
| Knowledge Editing | zsRE | Generality | 96.4 | 181 |
| Lifelong Free-text Knowledge Editing | MRLF-Bench | BLEU | 36.36 | 140 |
| Commonsense Question Answering | CommonsenseQA | Accuracy | 20.23 | 83 |
| Model Editing | zsRE | Efficacy | 94.91 | 71 |
| Sequential Model Editing | CounterFact | Efficacy | 98.55 | 61 |
| Privacy Editing | TDE Email | Leakage | 0.00e+0 | 56 |
| Sequential Model Editing | zsRE | Efficacy | 94.91 | 55 |
| Model Editing | UltraEditBench | Efficacy | 0.82 | 51 |
Showing 10 of 149 rows.