
Knowledge Neurons in Pretrained Transformers

About

Large-scale pretrained language models are surprisingly good at recalling factual knowledge presented in the training corpus. In this paper, we present preliminary studies on how factual knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons. Specifically, we examine the fill-in-the-blank cloze task for BERT. Given a relational fact, we propose a knowledge attribution method to identify the neurons that express the fact. We find that the activation of such knowledge neurons is positively correlated with the expression of their corresponding facts. In our case studies, we attempt to leverage knowledge neurons to edit (e.g., update or erase) specific factual knowledge without fine-tuning. Our results shed light on understanding the storage of knowledge within pretrained Transformers. The code is available at https://github.com/Hunter-DDM/knowledge-neurons.
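The knowledge attribution idea can be sketched as an integrated-gradients computation over one FFN layer's intermediate activations: scale the activations from zero up to their observed values, average the gradient of the answer probability along that path, and multiply by the activations. The sketch below is illustrative only; `answer_prob` is a hypothetical stand-in (a linear readout plus sigmoid) for BERT's cloze probability, and the threshold for selecting candidate neurons is an assumed fraction of the maximum attribution, not the paper's exact setting.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in for the model's probability of the correct
# cloze answer as a function of one FFN layer's intermediate
# activations (in the paper this comes from BERT itself).
READOUT = np.array([0.1, 2.0, -0.5, 0.8])

def answer_prob(activations):
    return sigmoid(READOUT @ activations)

def numerical_grad(f, x, eps=1e-5):
    # Central-difference gradient of a scalar function f at x.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        up, down = x.copy(), x.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (f(up) - f(down)) / (2 * eps)
    return grad

def knowledge_attribution(activations, m=20):
    # Riemann approximation of integrated gradients:
    #   Attr_i = w_i * (1/m) * sum_{k=1..m} dP(k/m * w)/dw_i
    total = np.zeros_like(activations)
    for k in range(1, m + 1):
        total += numerical_grad(answer_prob, (k / m) * activations)
    return activations * total / m

acts = np.array([1.0, 0.5, 2.0, 1.0])   # observed FFN activations (toy values)
attr = knowledge_attribution(acts)
# Candidate knowledge neurons: attribution above a fraction of the maximum
# (the 0.2 threshold here is an assumption for illustration).
candidates = np.where(attr > 0.2 * attr.max())[0]
```

With this linear-readout toy, the attribution of neuron i is proportional to `acts[i] * READOUT[i]`, so the highest-scoring neuron is the one whose activation contributes most to the answer probability; editing then amounts to suppressing or amplifying exactly those activations.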

Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, Furu Wei • 2021

Related benchmarks

Task | Dataset | Result | Rank
Knowledge Editing | zsRE | Generality: 0.00e+0 | 110
Chunking | Chunking | RAC: 47.2 | 34
Model Editing | CounterFact | Reliability: 12.3 | 30
Model Editing | RIPE | Reliability: 21.8 | 30
Commonsense Reasoning | Commonsense | RCC: 34.3 | 29
Sentiment Analysis | Sentiment | RAC: 16.1 | 29
Model Editing | zsRE | Reliability: 0.202 | 16
Sequential Model Editing | ZSRE (test) | Reliability: 1 | 14
Model Editing | COUNTERFACT 7,500-record GPT-2 XL (test) | Score: 35.6 | 9
Lifelong Knowledge Editing | Lifelong Editing on GPT-2 XL 1024 edits (test) | Score (S): 0.00e+0 | 6
