
TokenSkip: Controllable Chain-of-Thought Compression in LLMs

About

Chain-of-Thought (CoT) has been proven effective in enhancing the reasoning capabilities of large language models (LLMs). Recent advancements, such as OpenAI's o1 and DeepSeek-R1, suggest that scaling up the length of CoT sequences during inference could further boost LLM reasoning performance. However, due to the autoregressive nature of LLM decoding, longer CoT outputs lead to a linear increase in inference latency, adversely affecting user experience, particularly when the CoT exceeds 10,000 tokens. To address this limitation, we analyze the semantic importance of tokens within CoT outputs and reveal that their contributions to reasoning vary. Building on this insight, we propose TokenSkip, a simple yet effective approach that enables LLMs to selectively skip less important tokens, allowing for controllable CoT compression. Extensive experiments across various models and tasks demonstrate the effectiveness of TokenSkip in reducing CoT token usage while preserving strong reasoning performance. Notably, when applied to Qwen2.5-14B-Instruct, TokenSkip reduces reasoning tokens by 40% (from 313 to 181) on GSM8K, with less than a 0.4% performance drop. We release our code and checkpoints at https://github.com/hemingkx/TokenSkip.
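The core idea of the abstract, pruning a CoT sequence down to a target ratio by keeping only its highest-importance tokens, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-token importance scores are supplied as inputs here, whereas TokenSkip derives them from a learned importance measure, and the function name `compress_cot` is our own.

```python
def compress_cot(tokens, importances, ratio):
    """Keep the most important `ratio` fraction of CoT tokens, preserving order.

    tokens      : list of CoT token strings
    importances : per-token importance scores (higher = more important);
                  illustrative here, learned in the actual method
    ratio       : target retention ratio in (0, 1], e.g. 0.5 keeps ~50% of tokens
    """
    if len(tokens) != len(importances):
        raise ValueError("tokens and importances must align")
    keep = max(1, round(len(tokens) * ratio))
    # Pick indices of the `keep` highest-scoring tokens, then restore
    # their original order so the compressed CoT stays readable.
    top = sorted(
        sorted(range(len(tokens)), key=lambda i: importances[i], reverse=True)[:keep]
    )
    return [tokens[i] for i in top]


if __name__ == "__main__":
    cot = ["First", ",", "we", "add", "3", "and", "4", "to", "get", "7", "."]
    scores = [0.2, 0.05, 0.1, 0.9, 0.95, 0.3, 0.95, 0.1, 0.6, 0.99, 0.05]
    # Connectives and punctuation score low and are skipped; the
    # operands, operation, and answer survive compression.
    print(compress_cot(cot, scores, 0.5))
```

In the paper, the compressed CoT sequences produced this way are used to fine-tune the LLM so that it learns to emit shortened reasoning directly at a user-specified compression ratio.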

Heming Xia, Chak Tou Leong, Wenjie Wang, Yongqi Li, Wenjie Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH500 (test) | Accuracy | 80.6 | 381 |
| Mathematical Reasoning | SVAMP | Accuracy | 90.3 | 368 |
| Mathematical Reasoning | GSM8K | Accuracy | 90.83 | 351 |
| Mathematical Reasoning | MultiArith | Accuracy | 99.4 | 116 |
| General Reasoning | StratQA | Accuracy | 77.9 | 91 |
| Mathematical Reasoning | MATH | Accuracy | 44.5 | 88 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 94.39 | 39 |
| Code Reasoning | HumanE | Accuracy | 75.8 | 35 |
| Mathematical Reasoning | GSM8K | Accuracy | 0.884 | 30 |
| Mathematical Reasoning | MATH 500 | Accuracy | 54.8 | 30 |

Showing 10 of 36 rows.
