
A State-Transition Framework for Efficient LLM Reasoning

About

While Long Chain-of-Thought (CoT) reasoning significantly improves the performance of Large Language Models (LLMs) on complex reasoning tasks, the substantial computational and memory costs of generating long CoT sequences limit their efficiency and practicality. Existing studies usually enhance the reasoning efficiency of LLMs by compressing CoT sequences. However, this approach conflicts with test-time scaling, limiting the reasoning capacity of LLMs. In this paper, we propose an efficient reasoning framework that models the reasoning process of LLMs as a state-transition process. Specifically, we first apply a linear attention mechanism to estimate the LLM's reasoning state, which records the historical reasoning information from previous reasoning steps. Then, based on the query prompt and the reasoning state, the LLM can efficiently perform the current reasoning step and update the state. With linear attention, each token in the current reasoning step can directly retrieve relevant historical reasoning information from the reasoning state, without explicitly attending to tokens in previous reasoning steps. In this way, the computational complexity of attention is reduced from quadratic to linear, significantly improving the reasoning efficiency of LLMs. In addition, we propose a state-based reasoning strategy to mitigate the over-thinking issue caused by noisy reasoning steps. Extensive experiments across multiple datasets and model sizes demonstrate that our framework not only improves the reasoning efficiency of LLMs but also enhances their reasoning performance.

Liang Zhang, Yu Zhao, Longyue Wang, Tianqi Shi, Weihua Luo, Kaifu Zhang, Jinsong Su • 2026

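The paper's exact state-update equations are not reproduced on this page, but the mechanism the abstract describes — each token reading from and writing to a fixed-size reasoning state instead of attending over all previous tokens — matches a standard linear-attention recurrence. A minimal NumPy sketch under that assumption; the feature map `phi` and all function names are illustrative, not the authors' implementation:

```python
import numpy as np

def init_state(d_k, d_v):
    """Reasoning state: a d_k x d_v summary matrix plus a normalizer vector."""
    return np.zeros((d_k, d_v)), np.zeros(d_k)

def phi(x):
    """Positive feature map (ELU + 1 is a common choice in linear attention)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def attend_and_update(q, k, v, state):
    """Fold the current (k, v) into the state, then read it with query q.

    Per-token cost is O(d_k * d_v), independent of sequence length, so a
    T-token sequence is processed in linear rather than quadratic time.
    """
    S, z = state
    fq, fk = phi(q), phi(k)
    S = S + np.outer(fk, v)          # accumulate current step into the state
    z = z + fk
    out = fq @ S / (fq @ z + 1e-6)   # retrieve historical info from the state
    return out, (S, z)
```

Because the state `(S, z)` summarizes all previous key–value pairs, the output at step `t` equals causal linear attention over tokens `1..t` without ever materializing the `T x T` attention matrix.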
Related benchmarks

Task                     Dataset                                            Metric     Result   Rank
Mathematical Reasoning   AIME 24                                            Accuracy   43.3     130
Mathematical Reasoning   AIME 24                                            Accuracy   43.3     113
Mathematical Reasoning   AIME 25                                            Accuracy   36.7     14
Mathematical Reasoning   Average (GSM8K, MATH-500, AMC23, AIME24, AIME25)   Accuracy   69.9     14
Mathematical Reasoning   GSM8K                                              Accuracy   90.9     12
Mathematical Reasoning   MATH 500                                           Accuracy   90.0     12
Mathematical Reasoning   AMC 23                                             Accuracy   85.0     12
Mathematical Reasoning   GSM8K                                              Accuracy   91.6     2
Mathematical Reasoning   MATH 500                                           Accuracy   90.4     2
