
Not All Tokens Matter: Towards Efficient LLM Reasoning via Token Significance in Reinforcement Learning

About

Large language models (LLMs) show strong reasoning abilities but often produce unnecessarily long explanations that reduce efficiency. Although reinforcement learning (RL) has been used to improve reasoning, most methods focus on accuracy and rely on uniform length-based rewards that overlook the differing contributions of individual tokens, often harming correctness. We revisit length optimization in RL from the perspective of token significance. Observing that many chain-of-thought (CoT) tokens contribute little to the final answer, we introduce a significance-aware length reward that selectively penalizes insignificant tokens, reducing redundancy while preserving essential reasoning. We also propose a dynamic length reward that encourages more detailed reasoning early in training and gradually shifts toward conciseness as learning progresses. Integrating these components into standard policy optimization yields a framework that improves both reasoning efficiency and accuracy. Experiments across multiple benchmarks demonstrate substantial reductions in response length while preserving or improving correctness, highlighting the importance of modeling token significance for efficient LLM reasoning.
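The abstract combines two ideas: a length penalty that applies only to insignificant CoT tokens, and a penalty weight that grows over training. A minimal sketch of how such a reward could be shaped is below; it assumes per-token significance scores in [0, 1] are already available (the paper's actual significance estimator, threshold, and schedule are not specified here, so the function, its parameters, and the linear schedule are all illustrative assumptions):

```python
from typing import List


def significance_length_reward(
    significance: List[float],  # per-CoT-token significance score in [0, 1]
    correct: bool,              # whether the final answer is correct
    step: int,                  # current training step
    total_steps: int,           # total training steps
    threshold: float = 0.5,     # tokens below this count as insignificant (assumed)
    max_penalty: float = 1.0,   # cap on the length penalty (assumed)
) -> float:
    """Accuracy reward minus a penalty on insignificant tokens.

    The penalty weight ramps up linearly over training, so early training
    tolerates longer, more detailed reasoning, while later training
    increasingly favors conciseness.
    """
    accuracy_reward = 1.0 if correct else 0.0
    if not significance:
        return accuracy_reward

    # Fraction of CoT tokens judged insignificant.
    insignificant = sum(1 for s in significance if s < threshold)
    frac = insignificant / len(significance)

    # Dynamic schedule: penalty weight grows from 0 toward 1 as training proceeds.
    weight = min(1.0, step / max(1, total_steps))

    return accuracy_reward - max_penalty * weight * frac
```

Under this sketch, a correct answer with mostly significant tokens keeps a reward near 1.0, while a correct but padded chain of thought is penalized, and the penalty only bites late in training.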

Hanbing Liu, Lang Cao, Yuanyi Ren, Mengyu Zhou, Haoyu Dong, Xiaojun Ma, Shi Han, Dongmei Zhang• 2025

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | MATH500 | Accuracy 88.8 | 12
Mathematical Reasoning | AIME 2024 | Accuracy 63.3 | 12
Mathematical Reasoning | GSM8K | Accuracy 96.1 | 12
STEM Reasoning | TheoremQA | Accuracy 36.8 | 8
Mathematical Reasoning | MATH 500 | Accuracy 82.2 | 8
Mathematical Reasoning | AIME 2024 | Accuracy 33.3 | 8
