
NEAT: Neuron-Based Early Exit for Large Reasoning Models

About

Large Reasoning Models (LRMs) often suffer from overthinking, a phenomenon in which redundant reasoning steps are generated after a correct solution has already been reached. Existing early reasoning exit methods mitigate overthinking mainly through output-level heuristics or trained probing models that skip redundant reasoning steps. However, these approaches typically require additional rollout computation or externally labeled datasets. In this paper, we propose NEAT, a Neuron-based Early reAsoning exiT framework that monitors neuron-level activation dynamics to enable training-free early exits without introducing additional test-time computation. NEAT identifies exit-associated neurons and tracks their activation patterns during reasoning to dynamically trigger early exit or suppress reflection, thereby reducing unnecessary reasoning while preserving solution quality. Experiments on four reasoning benchmarks across six models of different scales and architectures show that, for each model, NEAT achieves an average token reduction of 22% to 28% across the four benchmarks while maintaining accuracy.
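The exit decision described above can be illustrated with a minimal sketch. This is not the paper's implementation: the neuron indices, the aggregation (a simple mean), and the threshold are all hypothetical placeholders standing in for NEAT's actual neuron-selection and triggering procedure, which the abstract does not specify.

```python
def should_exit_early(activations, exit_neurons, threshold=0.8):
    """Toy NEAT-style check: signal an early exit when the mean
    activation of exit-associated neurons crosses a threshold.

    activations  -- per-neuron activation values at the current step
    exit_neurons -- assumed indices of exit-associated neurons
    threshold    -- hypothetical trigger level (not from the paper)
    """
    signal = sum(activations[i] for i in exit_neurons) / len(exit_neurons)
    return signal >= threshold

# Toy usage: one decoding step's activation vector.
acts = [0.1, 0.9, 0.95, 0.2, 0.85]
exit_ids = [1, 2, 4]
print(should_exit_early(acts, exit_ids))  # mean(0.9, 0.95, 0.85) = 0.9 -> True
```

In a real decoding loop, a check like this would run at each reasoning step, and a positive signal would terminate generation (or suppress a reflection segment) instead of continuing to sample tokens.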

Kang Liu, Yongkang Liu, Xiaocui Yang, Peidong Wang, Wen Zhang, Shi Feng, Yifei Zhang, Daling Wang • 2026

Related benchmarks

Task                         Dataset    Metric     Result    Rank
Mathematical Reasoning       AIME 24    Accuracy   70        113
Mathematical Reasoning       MATH 500   Accuracy   95        28
Mathematical Reasoning       AMC 23     Accuracy   93.3      28
Science Question Answering   GPQA D     Accuracy   64.1      28
