
Self-Training Elicits Concise Reasoning in Large Language Models

About

Chain-of-thought (CoT) reasoning has enabled large language models (LLMs) to use additional computation, in the form of intermediate tokens, to solve complex tasks. However, we posit that typical reasoning traces contain many redundant tokens, incurring extraneous inference costs. Upon examination of the output distributions of current LLMs, we find evidence of a latent ability to reason more concisely than their default behavior suggests. To elicit this capability, we propose simple fine-tuning methods that leverage self-generated concise reasoning paths obtained via best-of-N sampling and few-shot conditioning, in task-specific settings. Our combined method achieves a 30% reduction in output tokens on average, across five model families on GSM8K and MATH, while maintaining average accuracy. By exploiting the fundamental stochasticity and in-context learning capabilities of LLMs, our self-training approach robustly elicits concise reasoning across a wide range of models, including those with extensive post-training. Code is available at https://github.com/TergelMunkhbat/concise-reasoning
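The core selection step the abstract describes (sample N reasoning traces, keep the shortest correct one, and use the surviving pairs as a fine-tuning set) can be sketched as follows. This is a minimal illustration, not the repository's implementation: `sample_fn` and `is_correct` are hypothetical stand-ins for model sampling (possibly few-shot-conditioned) and answer checking.

```python
def shortest_correct_sample(problem, answer, sample_fn, is_correct, n=8):
    """Best-of-N: draw n reasoning traces and keep the shortest correct one.

    sample_fn(problem) -> str is assumed to draw one stochastic reasoning
    trace; is_correct(trace, answer) -> bool checks the final answer.
    """
    traces = [sample_fn(problem) for _ in range(n)]
    correct = [t for t in traces if is_correct(t, answer)]
    # Token length would be used in practice; character length stands in here.
    return min(correct, key=len) if correct else None


def build_finetune_set(dataset, sample_fn, is_correct, n=8):
    """Collect (problem, concise trace) pairs for supervised fine-tuning.

    Problems where no sampled trace is correct are simply dropped.
    """
    pairs = []
    for problem, answer in dataset:
        best = shortest_correct_sample(problem, answer, sample_fn, is_correct, n)
        if best is not None:
            pairs.append((problem, best))
    return pairs
```

Fine-tuning the model on the resulting pairs then shifts its default behavior toward the concise traces it was already capable of producing.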

Tergel Munkhbat, Namgyu Ho, Seo Hyun Kim, Yongjin Yang, Yujin Kim, Se-Young Yun • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH500 (test) | Accuracy | 90 | 381 |
| Mathematical Reasoning | AMC23 (test) | Pass@1 | 85.9 | 36 |
| Mathematical Reasoning | AMC 23 | Accuracy | 88.5 | 24 |
| Mathematical Reasoning | MATH 500 | Accuracy | 91.3 | 24 |
| Mathematical Reasoning | AIME 2024 | Accuracy | 50 | 24 |
| Mathematical Reasoning | GSM8K | Accuracy | 91.6 | 24 |
| Mathematical Reasoning | AIME25 | Accuracy | 36.6 | 24 |
| Mathematical Reasoning | Math Benchmarks Overall (test) | Pass@1 | 84.8 | 12 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 94.24 | 12 |
| Mathematical Reasoning | GSM8K (test) | Pass@1 | 94.9 | 12 |

Showing 10 of 13 rows.
