Neuron Empirical Gradient: Discovering and Quantifying Neurons' Global Linear Controllability
About
While feed-forward neurons in pre-trained language models (PLMs) can encode knowledge, prior work has targeted only a small subset of neurons that heavily influence outputs. This leaves the broader role of neuron activations unclear, limiting progress in areas such as knowledge editing. Using neuron interventions on a knowledge probing dataset, we uncover a global linear relationship between neuron activations and model outputs. The gradient of this linear relationship, which we call the neuron empirical gradient (NEG), captures how changes in activations affect predictions. To compute NEG efficiently, we propose NeurGrad, enabling large-scale analysis of neuron behavior in PLMs. We also show, via skill neuron probing, that NEG effectively captures language skills across diverse prompts. Experiments on MCEval8k, a multi-genre multiple-choice knowledge benchmark, support NEG's ability to represent model knowledge. Further analysis highlights the key properties of NEG-based skill representation: efficiency, robustness, flexibility, and interdependency. The code and data are released.
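The core idea behind NEG can be sketched with a toy example: intervene on a single neuron's activation at several values, record the model's output score each time, and fit the slope of the resulting activation-output line. The sketch below is illustrative only; the model function and all names are hypothetical, and the paper's NeurGrad method estimates NEG far more efficiently than this brute-force sweep.

```python
def toy_model_output(activation: float) -> float:
    """Stand-in for a PLM's output logit as a function of one neuron's
    activation. Assumed roughly linear, mirroring the global linear
    relationship the paper reports (slope/intercept here are made up)."""
    return 0.8 * activation + 0.1


def empirical_gradient(model, values):
    """Least-squares slope of model(v) vs. v over the intervention points.
    This slope is the neuron empirical gradient (NEG) for this neuron."""
    n = len(values)
    ys = [model(v) for v in values]
    mean_x = sum(values) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(values, ys))
    var = sum((x - mean_x) ** 2 for x in values)
    return cov / var


# Intervene at a grid of activation values and fit the slope.
neg = empirical_gradient(toy_model_output, [-2.0, -1.0, 0.0, 1.0, 2.0])
print(round(neg, 3))  # recovers the underlying slope, 0.8
```

In a real PLM the intervention would overwrite one feed-forward neuron's activation (e.g. via a forward hook) and re-run the forward pass; the linear fit step stays the same.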
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Question Answering | MCEval CSQA 8K (test) | Accuracy | 84.6 | 14 |
| Factual Knowledge Retrieval | MCEval mLAMA 8K (test) | Accuracy | 78.5 | 14 |
| Named Entity Recognition | MCEval NER 8K (test) | Accuracy | 0.877 | 14 |
| Paraphrase Identification | MCEval PAWS 8K (test) | Accuracy | 87.3 | 14 |
| Hallucination Evaluation | MCEval HaluEval 8K (test) | Accuracy | 79.8 | 14 |
| Topic Classification | MCEval Agnews 8K (test) | Accuracy | 82.4 | 14 |