
Training Language Models to Reason Efficiently

About

Scaling model size and training data has led to great advances in the performance of Large Language Models (LLMs). However, the diminishing returns of this approach necessitate alternative methods to improve model capabilities, particularly in tasks requiring advanced reasoning. Large reasoning models, which leverage long chain-of-thoughts, bring unprecedented breakthroughs in problem-solving capabilities, but at a substantial deployment cost associated with longer generations. Reducing inference costs is crucial for the economic feasibility, user experience, and environmental sustainability of these models. In this work, we propose to train large reasoning models to reason efficiently. More precisely, we use reinforcement learning (RL) to train reasoning models to dynamically allocate inference-time compute based on task complexity. Our method incentivizes models to minimize unnecessary computational overhead while maintaining accuracy, thereby achieving substantial efficiency gains. It enables the derivation of a family of reasoning models with varying efficiency levels, controlled via a single hyperparameter. Experiments on two open-weight large reasoning models demonstrate significant reductions in inference cost while preserving most of the accuracy.
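To make the idea concrete, the kind of objective the abstract describes can be sketched as a length-penalized correctness reward, where a single hyperparameter trades accuracy against inference cost. The function below is an illustrative assumption, not the paper's actual formula; the names `efficiency_reward`, `alpha`, and the linear length normalization are hypothetical.

```python
def efficiency_reward(is_correct: bool, length: int,
                      max_length: int, alpha: float) -> float:
    """Sketch of a length-penalized RL reward (assumed form, not the paper's).

    `alpha` plays the role of the single efficiency hyperparameter:
    alpha = 0 recovers a plain correctness reward, while larger alpha
    increasingly favors shorter chains of thought.
    """
    if not is_correct:
        # Incorrect answers earn nothing, regardless of generation length,
        # so the model cannot trade accuracy for brevity outright.
        return 0.0
    # Correct answers are discounted by normalized generation length.
    return 1.0 - alpha * (length / max_length)


if __name__ == "__main__":
    # With alpha > 0, a shorter correct solution earns more reward.
    short = efficiency_reward(True, length=200, max_length=1000, alpha=0.3)
    long_ = efficiency_reward(True, length=900, max_length=1000, alpha=0.3)
    print(short > long_)
```

Under this shaping, training a family of models at different `alpha` values would yield the spectrum of efficiency levels the abstract mentions.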

Daman Arora, Andrea Zanette • 2025

Related benchmarks

| Task                   | Dataset       | Metric          | Result | Rank |
|------------------------|---------------|-----------------|--------|------|
| Mathematical Reasoning | GSM8K         | Accuracy        | 91.67  | 351  |
| Math Reasoning         | GSM8K         | Accuracy        | 91.1   | 126  |
| Mathematical Reasoning | GSM8K         | pass@1          | 91.1   | 102  |
| Mathematical Reasoning | AIME 2025     | Pass@1          | 46.9   | 96   |
| Mathematical Reasoning | AIME 2024     | Pass@1          | 53.8   | 86   |
| Mathematical Reasoning | Minerva Math  | pass@1 Accuracy | 39.5   | 82   |
| Mathematical Reasoning | MATH 500      | Accuracy        | 91.2   | 73   |
| Mathematical Reasoning | AMC 2023      | Accuracy        | 88.12  | 65   |
| Mathematical Reasoning | AIME 2024     | Pass@1          | 51.9   | 54   |
| Code Reasoning         | LiveCodeBench | Accuracy        | 53.5   | 46   |

Showing 10 of 39 rows.
