Neuron Empirical Gradient: Discovering and Quantifying Neurons' Global Linear Controllability

About

While feed-forward neurons in pre-trained language models (PLMs) can encode knowledge, past research targeted a small subset of neurons that heavily influence outputs. This leaves the broader role of neuron activations unclear, limiting progress in areas like knowledge editing. We uncover a global linear relationship between neuron activations and outputs using neuron interventions on a knowledge probing dataset. The gradient of this linear relationship, which we call the neuron empirical gradient (NEG), captures how changes in activations affect predictions. To compute NEG efficiently, we propose NeurGrad, enabling large-scale analysis of neuron behavior in PLMs. We also show that NEG effectively captures language skills across diverse prompts through skill neuron probing. Experiments on MCEval8k, a multi-genre multiple-choice knowledge benchmark, support NEG's ability to represent model knowledge. Further analysis highlights the key properties of NEG-based skill representation: efficiency, robustness, flexibility, and interdependency. The code and data are released.
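The core idea — fitting a linear relationship between a neuron's activation and the model's output under interventions, whose slope is the neuron empirical gradient (NEG) — can be illustrated with a minimal sketch. This is not the paper's released code; `toy_output` is a hypothetical stand-in for a real PLM forward pass, and `empirical_gradient` simply fits a least-squares line to (activation shift, output) pairs.

```python
import numpy as np

def toy_output(activation: float) -> float:
    # Hypothetical stand-in for the model's score of the gold answer:
    # roughly linear in the neuron's activation, with a mild nonlinearity.
    return 1.7 * activation + 0.05 * np.sin(activation)

def empirical_gradient(output_fn, base_activation: float,
                       shifts=np.linspace(-1.0, 1.0, 11)) -> float:
    """Intervene on one neuron's activation over a range of shifts and
    return the slope of the best-fit line output vs. shift."""
    xs = np.asarray(shifts, dtype=float)
    ys = np.array([output_fn(base_activation + s) for s in xs])
    xs_c = xs - xs.mean()          # centered least-squares slope
    return float(np.dot(xs_c, ys - ys.mean()) / np.dot(xs_c, xs_c))

grad = empirical_gradient(toy_output, base_activation=0.5)
print(grad)  # close to 1.7 for this nearly linear toy model
```

In the paper's setting the slope would be estimated from actual model predictions on a knowledge probing dataset, and NeurGrad is introduced precisely to avoid running this kind of per-neuron intervention at scale.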

Xin Zhao, Zehui Jiang, Naoki Yoshinaga • 2024

Related benchmarks

| Task | Dataset | Accuracy | Rank |
| --- | --- | --- | --- |
| Commonsense Question Answering | MCEval CSQA 8K (test) | 84.6 | 14 |
| Factual Knowledge Retrieval | MCEval mLAMA 8K (test) | 78.5 | 14 |
| Named Entity Recognition | MCEval NER 8K (test) | 0.877 | 14 |
| Paraphrase Identification | MCEval PAWS 8K (test) | 87.3 | 14 |
| Hallucination Evaluation | MCEval HaluEval 8K (test) | 79.8 | 14 |
| Topic Classification | MCEval Agnews 8K (test) | 82.4 | 14 |
