
Selective Neuron Amplification for Training-Free Task Enhancement

About

Large language models often fail on tasks they appear to already understand. In our experiments, this seems to stem less from missing knowledge than from task-relevant internal circuits not being strongly activated during inference. We explore Selective Neuron Amplification (SNA), which increases the influence of task-relevant neurons without changing the model's parameters. The method operates at inference time and does not permanently alter the model. SNA helps mainly when the model is uncertain and has little effect when the model is already confident, suggesting that some failures reflect weak activation rather than a lack of capability.
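The page does not describe how the amplification is implemented. As a minimal sketch of the idea, assuming task-relevant neuron indices are already known, one can scale those neurons' activations by a gain factor during the forward pass while leaving the weights untouched. The `amplify_neurons` helper and the toy values below are hypothetical, not the authors' code:

```python
import numpy as np

def amplify_neurons(hidden, neuron_idx, gain=2.0):
    """Scale the activations of selected neurons by `gain`.

    hidden: (batch, d) activation matrix from one layer.
    neuron_idx: indices of task-relevant neurons (assumed identified elsewhere).
    The model's weights are never modified; only this forward pass changes.
    """
    out = hidden.copy()          # leave the original activations intact
    out[:, neuron_idx] *= gain   # amplify only the selected neurons
    return out

# Toy example: one sample, a 4-neuron layer; amplify neurons 1 and 3.
h = np.array([[0.5, 0.2, -0.1, 0.4]])
h_amp = amplify_neurons(h, [1, 3], gain=2.0)
print(h_amp)  # [[ 0.5  0.4 -0.1  0.8]]
```

In a real transformer this scaling would typically be applied via an inference-time hook on the chosen layer (e.g. a PyTorch forward hook), so the intervention can be enabled or disabled per query.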

Ryyan Akhtar • 2026

Related benchmarks

Task   | Dataset       | Result | Rank
------ | ------------- | ------ | ----
Coding | Coding Easy   | -      | 1
Coding | Coding Medium | -      | 1
Coding | Coding Hard   | -      | 1
Logic  | Logic Easy    | -      | 1
Logic  | Logic Medium  | -      | 1
Logic  | Logic Hard    | -      | 1
Math   | MATH Easy     | -      | 1
Math   | Math Medium   | -      | 1
Math   | MATH Hard     | -      | 1
Poetry | Poetry Easy   | -      | 1

(Showing 10 of 12 rows)
