Selective Neuron Amplification for Training-Free Task Enhancement
About
Large language models often fail on tasks they appear to already understand. In our experiments, these failures seem to stem less from missing knowledge than from task-relevant internal circuits being only weakly activated during inference. We explore Selective Neuron Amplification (SNA), an inference-time method that increases the influence of task-relevant neurons without modifying the model's parameters, so the model is never permanently altered. SNA helps mainly when the model is uncertain and has little effect when it is already confident, which suggests that some failures reflect weak activation rather than missing capability.
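The core operation can be sketched in a few lines: scale the activations of a chosen set of neurons at inference time, leaving the weights untouched. The sketch below is a minimal NumPy illustration, not the project's implementation; the neuron indices and gain value are hypothetical, and in a real model the same scaling would typically be applied via a forward hook on a hidden layer.

```python
import numpy as np

def amplify_neurons(activations, neuron_idx, gain):
    """Scale the activations of selected neurons by `gain`.

    All other neurons pass through unchanged; the model's
    parameters are never modified (hypothetical sketch).
    """
    out = activations.copy()
    out[..., neuron_idx] *= gain
    return out

# Toy hidden-layer output: 2 examples x 4 neurons.
h = np.array([[0.1, 0.5, 0.2, 0.9],
              [0.3, 0.1, 0.4, 0.2]])

# Assume neurons 1 and 3 were identified as task-relevant
# (how they are identified is outside this sketch).
boosted = amplify_neurons(h, neuron_idx=[1, 3], gain=1.5)
```

Because the scaling happens on the activations rather than the weights, removing the hook restores the original model exactly, which is what makes the method training-free and non-destructive.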
Ryyan Akhtar • 2026
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Coding | Coding Easy | -- | 1 |
| Coding | Coding Medium | -- | 1 |
| Coding | Coding Hard | -- | 1 |
| Logic | Logic Easy | -- | 1 |
| Logic | Logic Medium | -- | 1 |
| Logic | Logic Hard | -- | 1 |
| Math | MATH Easy | -- | 1 |
| Math | MATH Medium | -- | 1 |
| Math | MATH Hard | -- | 1 |
| Poetry | Poetry Easy | -- | 1 |