
ShorterBetter: Guiding Reasoning Models to Find Optimal Inference Length for Efficient Reasoning

About

Recent models such as OpenAI o1 and DeepSeek-R1 have demonstrated strong performance on reasoning-intensive tasks by generating extended Chain-of-Thought (CoT) traces. While longer reasoning helps with thorough exploration of solution paths for complex problems, it also often leads to inefficient and redundant outputs, a phenomenon commonly described as overthinking. In this paper, we propose ShorterBetter, a simple yet effective reinforcement learning method that enables reasoning models to learn their own optimal CoT lengths without manual supervision. We define the Sample Optimal Length (SOL) as the length of the shortest correct response among multiple generations, which serves as a dynamic reward signal to guide the model toward efficient reasoning. Applied to DeepSeek-Distill-Qwen-1.5B/7B as base models, ShorterBetter achieves a 50%-80% reduction in output length on both in-domain and out-of-domain reasoning tasks while maintaining accuracy. Our reasoning-trace analysis shows that ShorterBetter refines the structure of reasoning traces by reducing unnecessary repetition, excessive self-verification, and over-exploration of alternatives.
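The SOL idea from the abstract can be sketched in a few lines: sample several responses to a prompt, take the length of the shortest correct one as the target length, and score each sample by correctness minus a length penalty toward that target. The function below is a minimal illustration of this idea only; the penalty form, the `alpha` weight, and the fallback for all-incorrect groups are our assumptions, not the paper's exact reward.

```python
def sol_rewards(lengths, correct, alpha=1.0):
    """Illustrative SOL-style reward for one group of sampled responses.

    lengths: token length of each sampled response.
    correct: whether each response reached the right answer.
    alpha:   weight of the length penalty (hypothetical knob).
    """
    correct_lengths = [n for n, c in zip(lengths, correct) if c]
    # SOL = shortest correct response; if none is correct,
    # fall back to the mean length (our assumption, not the paper's rule).
    if correct_lengths:
        sol = min(correct_lengths)
    else:
        sol = sum(lengths) / len(lengths)
    rewards = []
    for n, c in zip(lengths, correct):
        # Correctness bonus minus a normalized distance-to-SOL penalty.
        r = (1.0 if c else 0.0) - alpha * abs(n - sol) / max(sol, 1)
        rewards.append(r)
    return rewards


# Usage: the short correct sample gets the highest reward,
# steering the policy toward concise-but-correct traces.
rs = sol_rewards([100, 50, 200], [True, True, False])
```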

Jingyang Yi, Jiazheng Wang, Sida Li • 2025

Related benchmarks

Task                       | Dataset                                            | Metric            | Result | Rank
---------------------------|----------------------------------------------------|-------------------|--------|-----
Mathematical Reasoning     | GSM8K                                              | Accuracy          | 79.33  | 351
Mathematical Reasoning     | MATH500                                            | Accuracy          | 82.47  | 57
Mathematical Reasoning     | SAT Math                                           | SAT Math Accuracy | 84.77  | 44
Mathematical Reasoning     | Olympiad Bench                                     | Accuracy          | 11.62  | 23
Mathematical Reasoning     | AMC23                                              | Accuracy          | 66.67  | 18
Mathematical Reasoning     | AIME 24                                            | Accuracy          | 37.78  | 18
Mathematical Reasoning     | Math Benchmarks Aggregate                          | Accuracy (Avg)    | 64.03  | 18
Mathematical Reasoning     | MATH                                               | Accuracy          | 72.33  | 18
Medical Question Answering | Medical Benchmarks (MedQA, MedMCQA, BULLET) (test) | MedQA Accuracy    | 0.3717 | 18
Mathematical Reasoning     | AMC 23                                             | Accuracy          | 74.84  | 9

Showing 10 of 14 rows.
