
Wait, We Don't Need to "Wait"! Removing Thinking Tokens Improves Reasoning Efficiency

About

Recent advances in large reasoning models have enabled complex, step-by-step reasoning but often introduce significant overthinking, resulting in verbose and redundant outputs that hinder efficiency. In this study, we examine whether explicit self-reflection, signaled by tokens such as "Wait" and "Hmm", is necessary for advanced reasoning. We propose NoWait, a simple yet effective approach that disables explicit self-reflection by suppressing these tokens during inference. Extensive experiments on ten benchmarks across textual, visual, and video reasoning tasks show that NoWait reduces chain-of-thought trajectory length by 27%–51% in five R1-style model series, without compromising model utility. NoWait thus offers a plug-and-play solution for efficient and utility-preserving multimodal reasoning.
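The core mechanism described above, suppressing self-reflection tokens at inference time, can be sketched as a logits mask applied at each decoding step. The token ids and function below are illustrative assumptions, not the authors' implementation; in practice the mask would be applied inside a model's generation loop (e.g. via a logits processor).

```python
import numpy as np

# Hypothetical ids for reflection tokens such as "Wait" and "Hmm".
# Real ids depend on the tokenizer; these are placeholders.
SUPPRESSED_TOKEN_IDS = [1734, 9083]

def suppress_tokens(logits: np.ndarray, banned_ids) -> np.ndarray:
    """Return a copy of `logits` with banned token ids set to -inf,
    so greedy or sampled decoding can never select them."""
    out = logits.copy()
    out[banned_ids] = -np.inf
    return out

# Toy decoding step: id 3 (a stand-in for "Wait") would win greedy
# decoding, but after suppression the next-best token is chosen.
logits = np.array([0.1, 0.5, 0.2, 3.0, 1.5])
masked = suppress_tokens(logits, banned_ids=[3])
print(int(np.argmax(logits)))  # 3 (reflection token wins unmasked)
print(int(np.argmax(masked)))  # 4 (suppressed; next-best token chosen)
```

Because the mask only touches a fixed set of token ids at each step, the approach is plug-and-play: it requires no retraining and composes with any decoding strategy.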

Chenlong Wang, Yuanning Feng, Dongping Chen, Zhaoyang Chu, Ranjay Krishna, Tianyi Zhou• 2025

Related benchmarks

| Task                   | Dataset  | Metric          | Result | Rank |
|------------------------|----------|-----------------|--------|------|
| Math Reasoning         | AMC23    | Pass@1 Accuracy | 95     | 68   |
| Mathematical Reasoning | MATH500  | Accuracy        | 88.7   | 57   |
| General Reasoning      | Overall  | Accuracy        | 83.3   | 40   |
| Math Reasoning         | GSM8K    | Pass@1 Accuracy | 96.3   | 36   |
| Math Reasoning         | AIME 24  | Pass@1 Score    | 66.7   | 36   |
| Math Reasoning         | MATH 500 | Pass@1          | 93.8   | 36   |
| Math Reasoning         | AIME 25  | Pass@1          | 63.3   | 33   |
| Math Reasoning         | Olympiad | Pass@1          | 62.6   | 30   |
| Question Answering     | GPQA     | Accuracy        | 38.9   | 22   |
| Mathematical Reasoning | AMC      | Accuracy        | 97.5   | 15   |

Showing 10 of 21 rows.
