
Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL

About

Large reasoning models (LRMs) are proficient at generating explicit, step-by-step reasoning sequences before producing final answers. However, such detailed reasoning can introduce substantial computational overhead and latency, particularly for simple problems. To address this over-thinking problem, we explore how to equip LRMs with adaptive thinking capabilities: enabling them to dynamically decide whether or not to engage in explicit reasoning based on problem complexity. Building on R1-style distilled models, we observe that inserting a simple ellipsis ("...") into the prompt can stochastically trigger either a thinking or no-thinking mode, revealing a latent controllability in the reasoning behavior. Leveraging this property, we propose AutoThink, a multi-stage reinforcement learning (RL) framework that progressively optimizes reasoning policies via stage-wise reward shaping. AutoThink learns to invoke explicit reasoning only when necessary, while defaulting to succinct responses for simpler tasks. Experiments on five mainstream mathematical benchmarks demonstrate that AutoThink achieves favorable accuracy-efficiency trade-offs compared to recent prompting and RL-based pruning methods. It can be seamlessly integrated into any R1-style model, including both distilled and further fine-tuned variants. Notably, AutoThink improves relative accuracy by 6.4 percent while reducing token usage by 52 percent on DeepSeek-R1-Distill-Qwen-1.5B, establishing a scalable and adaptive reasoning paradigm for LRMs. Project Page: https://github.com/ScienceOne-AI/AutoThink.
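The ellipsis trick described above can be illustrated with a minimal sketch. The exact chat-template tokens and prompt format are assumptions here (R1-style models typically reason inside a `<think>...</think>` block; the paper's actual implementation is in the project repository) — this only shows where the "..." would be inserted to stochastically bias the model toward answering without explicit reasoning:

```python
# Hedged sketch of the ellipsis-controllability observation: seeding the
# <think> block with "..." can trigger either thinking or no-thinking mode.
# The template strings below are illustrative, not the paper's exact format.

def build_prompt(question: str, ellipsis_hint: bool = False) -> str:
    """Assemble an R1-style prompt; optionally insert an ellipsis at the
    start of the thinking block to nudge the model toward a direct answer."""
    prompt = (
        "<|user|>" + question + "<|assistant|>"
        "<think>"  # R1-style models emit reasoning inside <think>...</think>
    )
    if ellipsis_hint:
        # Observation from the paper: this simple "..." can make the model
        # stochastically close the think block and respond succinctly.
        prompt += "..."
    return prompt

# The hinted prompt differs from the plain one only by the trailing "..."
hinted = build_prompt("What is 2 + 2?", ellipsis_hint=True)
print(hinted.endswith("<think>..."))  # True
```

AutoThink's multi-stage RL then shapes rewards over this latent switch so the model learns *when* to emit the full reasoning trace rather than being forced into one mode.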

Songjun Tu, Jiahao Lin, Qichao Zhang, Xiangyu Tian, Linjing Li, Xiangyuan Lan, Dongbin Zhao • 2025

Related benchmarks

| Task | Dataset | Accuracy | Rank |
|---|---|---|---|
| Mathematical Reasoning | GSM8K | 92.83 | 351 |
| Math Reasoning | GSM8K | 91.1 | 126 |
| Mathematical Reasoning | MATH500 | 83.8 | 57 |
| Math Reasoning | MATH 500 | 91.2 | 38 |
| Mathematical Reasoning | AIME 2025 | 23.8 | 38 |
| Math Reasoning | AIME 2024 | 0.548 | 37 |
| Mathematical Reasoning | AIME 2024 | 31.7 | 33 |
| Math Reasoning | AIME 2025 | 36.2 | 33 |
| Math Reasoning | AMC 2023 | 83.3 | 26 |
| Math Reasoning | OlympiadBench | 65.5 | 22 |

Showing 10 of 16 rows.
