
Transformer-Patcher: One Mistake worth One Neuron

About

Large Transformer-based Pretrained Language Models (PLMs) dominate almost all Natural Language Processing (NLP) tasks. Nevertheless, they still make mistakes from time to time. For a model deployed in an industrial environment, fixing these mistakes quickly and robustly is vital to the user experience. Previous works formalize such problems as Model Editing (ME) and mostly focus on fixing one mistake. However, the one-mistake-fixing scenario is not an accurate abstraction of the real-world challenge. In the deployment of AI services, mistakes keep emerging, and the same mistake may recur if not corrected in time. Thus a preferable solution is to rectify mistakes continuously, as soon as they appear. We therefore extend the existing ME setting to Sequential Model Editing (SME) to help develop more practical editing methods. Our study shows that most current ME methods yield unsatisfactory results in this scenario. We then introduce Transformer-Patcher, a novel model editor that can shift the behavior of transformer-based models by simply adding and training a few neurons in the last Feed-Forward Network (FFN) layer. Experimental results on both classification and generation tasks show that Transformer-Patcher can successively correct up to thousands of errors (Reliability) and generalize to their equivalent inputs (Generality) while retaining the model's accuracy on irrelevant inputs (Locality). Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME). The code is available at https://github.com/ZeroYuHuang/Transformer-Patcher.
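The core idea in the abstract — correcting one mistake by appending one trainable key/value neuron to the model's last FFN layer while freezing everything else — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the variable names (`k_p`, `b_p`, `v_p`), dimensions, and the choice of a negative bias to keep the patch neuron inactive on unrelated inputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 16

# Frozen weights of the last feed-forward layer: ffn(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(d_ff, d_model))  # key vectors (rows)
W2 = rng.normal(size=(d_model, d_ff))  # value vectors (columns)

def ffn(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# One "patch" neuron appended to the same layer. Only k_p, b_p, v_p would be
# trained (to fire on the mistaken input and emit the corrected behavior);
# W1 and W2 stay frozen, so unrelated inputs are largely untouched.
k_p = rng.normal(size=d_model)  # patch key: detects the mistaken input
b_p = -1.0                      # negative bias keeps the neuron off-target silent
v_p = rng.normal(size=d_model)  # patch value: the correction to inject

def patched_ffn(x):
    activation = np.maximum(k_p @ x + b_p, 0.0)  # ReLU gate of the patch neuron
    return ffn(x) + activation * v_p

x = rng.normal(size=d_model)
delta = patched_ffn(x) - ffn(x)  # the patch adds at most a rank-1 correction
```

When the ReLU gate is zero, `patched_ffn` is exactly the original layer, which is how the method preserves Locality; one mistake costs one such neuron.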

Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, Zhang Xiong • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Knowledge Editing | zsRE | Generality | 44.1 | 110 |
| Lifelong Knowledge Editing | E-VQA Lifelong Sequential | Rel. Score | 70.14 | 72 |
| Knowledge Editing | MMEdit E-VQA | Reliability | 85.6 | 61 |
| Knowledge Editing | VLKEB | Reliability | 50.98 | 45 |
| Model Editing | CounterFact | Reliability | 91.4 | 30 |
| Model Editing | RIPE | Reliability | 77 | 30 |
| Knowledge Editing | MMEdit E-IC 1.0 (test) | Reliability | 72.78 | 24 |
| Knowledge Editing | E-VQA MMEdit 1.0 (test) | Reliability | 80.35 | 24 |
| Model Editing | zsRE | Reliability | 0.864 | 16 |
| Knowledge Editing | MMEdit E-IC | Reliability | 85.6 | 16 |

Showing 10 of 19 rows.
