
Accordion-Thinking: Self-Regulated Step Summaries for Efficient and Readable LLM Reasoning

About

Scaling test-time compute via long Chain-of-Thought unlocks remarkable gains in reasoning capability, yet it faces practical limits due to the linear growth of the KV cache and the quadratic complexity of attention. In this paper, we introduce Accordion-Thinking, an end-to-end framework in which LLMs learn to self-regulate the granularity of their reasoning steps through dynamic summarization. This mechanism enables a Fold inference mode, in which the model periodically summarizes its thought process and discards former thoughts to reduce its dependence on historical tokens. We apply reinforcement learning to further incentivize this capability, uncovering a critical insight: the accuracy gap between the highly efficient Fold mode and the exhaustive Unfold mode progressively narrows and eventually vanishes over the course of training. This phenomenon demonstrates that the model learns to encode essential reasoning information into compact summaries, achieving effective compression of the reasoning context. Our Accordion-Thinker shows that, with learned self-compression, LLMs can tackle complex reasoning tasks with minimal dependency-token overhead and no loss of solution quality: it achieves 3x higher throughput at matched accuracy on a 48 GB GPU memory configuration, while the structured step summaries provide a human-readable account of the reasoning process.

Zhicheng Yang, Zhijiang Guo, Yinya Huang, Yongxin Wang, Wenlei Shi, Yiwei Wang, Xiaodan Liang, Jing Tang • 2026
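The Fold inference mode described in the abstract — reason for a few steps, summarize, then discard the raw thoughts and keep only the summary — can be sketched as a simple context-management loop. This is an illustrative sketch only: `generate_step` and `summarize` are hypothetical stand-ins for model calls, and the fold interval is a made-up parameter; the actual Accordion-Thinking mechanism is learned and self-regulated by the model.

```python
def fold_mode_reason(problem, generate_step, summarize, fold_every=4, max_steps=12):
    """Sketch of Fold-mode inference: context growth is bounded because
    every `fold_every` steps the accumulated thoughts are replaced by a
    compact summary."""
    context = [problem]   # the live context the model conditions on
    trace = []            # human-readable record of step summaries
    for step in range(max_steps):
        thought = generate_step(context)   # hypothetical model call
        context.append(thought)
        if (step + 1) % fold_every == 0:
            summary = summarize(context[1:])        # fold: compress the step tokens
            trace.append(summary)
            context = [problem, summary]            # discard former thoughts
    return context, trace


# Toy stand-ins to show the bounded-context behavior: each "thought" is a
# tag, and each "summary" just records how many items it compressed.
context, trace = fold_mode_reason(
    "problem",
    generate_step=lambda ctx: f"step-{len(ctx)}",
    summarize=lambda thoughts: f"summary-of-{len(thoughts)}-items",
)
```

After 12 steps with a fold every 4, the final context holds only the problem plus the latest summary, while `trace` retains one readable summary per fold — mirroring how Fold mode trades raw historical tokens for compact summaries.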

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | Minerva | -- | -- | 138 |
| Mathematical Reasoning | AIME24 | Pass@1 Accuracy | 32.2 | 14 |
| Mathematical Reasoning | AIME25 | Pass@1 (Avg@32) | 28.3 | 14 |
| Mathematical Reasoning | MATH500 | Pass@1 (Avg@32) | 89.9 | 14 |
| Mathematical Reasoning | AMC | Pass@1 (Avg@32) | 73.8 | 14 |
| Mathematical Reasoning | Macro Average (Selected Benchmarks) | Pass@1 (Avg@32) | 52.8 | 14 |
| Inference Throughput | AIME24/25 | Throughput (tokens/s) | 5.89e+3 | 6 |
