
Training Language Models to Reason Efficiently

About

Scaling model size and training data has led to great advances in the performance of Large Language Models (LLMs). However, the diminishing returns of this approach necessitate alternative methods to improve model capabilities, particularly in tasks requiring advanced reasoning. Large reasoning models, which leverage long chains of thought, bring unprecedented breakthroughs in problem-solving capabilities, but at a substantial deployment cost associated with longer generations. Reducing inference costs is crucial for the economic feasibility, user experience, and environmental sustainability of these models. In this work, we propose to train large reasoning models to reason efficiently. More precisely, we use reinforcement learning (RL) to train reasoning models to dynamically allocate inference-time compute based on task complexity. Our method incentivizes models to minimize unnecessary computational overhead while maintaining accuracy, thereby achieving substantial efficiency gains. It enables the derivation of a family of reasoning models with varying efficiency levels, controlled via a single hyperparameter. Experiments on two open-weight large reasoning models demonstrate significant reductions in inference cost while preserving most of the accuracy.

Daman Arora, Andrea Zanette • 2025
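
To make the idea concrete, the sketch below shows one way a length-aware RL reward of the kind described in the abstract could look. It is a minimal sketch, not the paper's actual objective: the function name `efficiency_reward`, the linear penalty shape, and the token-budget normalization are assumptions; the abstract only states that a single hyperparameter (here called `alpha`) controls the accuracy-efficiency trade-off.

```python
# Minimal sketch (an assumption, not the paper's exact objective) of a
# length-aware RL reward: correct answers earn more reward the shorter
# they are, wrong answers earn none, and a single hyperparameter
# `alpha` controls the accuracy-efficiency trade-off.

def efficiency_reward(correct: bool, num_tokens: int,
                      max_tokens: int, alpha: float) -> float:
    """Reward for one sampled solution.

    alpha = 0.0 recovers a plain correctness reward (no length pressure);
    larger alpha values yield increasingly economical models, so sweeping
    alpha produces a family of models from one training recipe.
    """
    if not correct:
        return 0.0  # no reward for wrong answers, regardless of length
    # Linear length penalty, clipped so the reward stays non-negative.
    penalty = alpha * min(num_tokens / max_tokens, 1.0)
    return 1.0 - penalty


# Example: at alpha = 0.4, a correct 2,000-token solution outscores a
# correct 14,000-token one, so RL pushes toward shorter reasoning.
short = efficiency_reward(True, 2_000, 16_000, alpha=0.4)   # 0.95
long = efficiency_reward(True, 14_000, 16_000, alpha=0.4)   # 0.65
```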

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | WinoGrande | - | - | 1085 |
| Mathematical Reasoning | MATH500 (test) | Accuracy | 95.8 | 514 |
| Mathematical Reasoning | GSM8K | Accuracy | 93.7 | 499 |
| Mathematical Reasoning | MATH | Accuracy | 90.4 | 338 |
| Mathematical Reasoning | AIME 24 | Accuracy | 37.1 | 160 |
| Mathematical Reasoning | AIME 24 | Accuracy | 48.13 | 154 |
| Mathematical Reasoning | GSM8K | Accuracy | 91.1 | 126 |
| Mathematical Reasoning | AMC 2023 | Accuracy | 88.12 | 124 |
| Mathematical Reasoning | GSM8K | Pass@1 | 91.1 | 102 |
| Mathematical Reasoning | AIME 2025 | Pass@1 | 46.9 | 96 |
Showing 10 of 86 rows.
