
SyncThink: A Training-Free Strategy to Align Inference Termination with Reasoning Saturation

About

Chain-of-Thought (CoT) prompting improves reasoning but often produces long, redundant traces that substantially increase inference cost. We present SyncThink, a training-free, plug-and-play decoding method that reduces CoT overhead without modifying model weights. We find that answer tokens attend weakly to early reasoning and instead concentrate on the special token "/think", indicating an information bottleneck. Building on this observation, SyncThink monitors the model's own reasoning-transition signal and terminates reasoning once that signal indicates saturation. Experiments on GSM8K, MMLU, GPQA, and BBH across three DeepSeek-R1 distilled models show that SyncThink achieves 62.00 percent average Top-1 accuracy with 656 generated tokens and 28.68 s latency, compared to 61.22 percent, 2141 tokens, and 92.01 s for full CoT decoding. On long-horizon tasks such as GPQA, SyncThink further yields up to +8.1 absolute accuracy points by preventing over-thinking.
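The early-termination idea can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' code: it assumes a per-step scalar "reasoning-transition signal" is exposed by the decoder (for instance, attention mass on the "/think" token), and the threshold value and the `step_fn` interface are placeholders invented for the example.

```python
# Hypothetical sketch of SyncThink-style early termination (not the paper's implementation).
# Assumption: each decoding step returns (token, signal), where `signal` is a scalar
# reasoning-transition indicator in [0, 1]; the actual signal and threshold are
# placeholders here.

SATURATION_THRESHOLD = 0.8  # hypothetical saturation threshold

def syncthink_decode(step_fn, max_steps=2048, threshold=SATURATION_THRESHOLD):
    """Generate reasoning tokens, stopping as soon as the transition signal
    indicates reasoning saturation, then close the reasoning span."""
    trace = []
    for _ in range(max_steps):
        token, signal = step_fn()      # one decoding step
        trace.append(token)
        if signal >= threshold:        # saturation detected: stop reasoning early
            trace.append("</think>")   # force end-of-reasoning, switch to answering
            break
    return trace

# Toy stand-in for a model: the signal rises linearly as reasoning "saturates".
def make_toy_model():
    state = {"t": 0}
    def step():
        state["t"] += 1
        signal = min(1.0, state["t"] / 10)  # reaches 1.0 by step 10
        return f"tok{state['t']}", signal
    return step

trace = syncthink_decode(make_toy_model())
print(len(trace), trace[-1])  # terminates after 8 steps plus the closing tag
```

With the toy model the loop stops at step 8 (signal 0.8), far short of `max_steps`, which mirrors how a saturation-triggered stop shortens the trace relative to full CoT decoding.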

Gengyang Li, Wang Cai, Yifeng Gao, Yunfang Wu • 2026

Related benchmarks

Task                              | Dataset      | Result                    | Rank
Multi-task Language Understanding | MMLU (test)  | Normalized Accuracy 77.99 | 76
Logical Reasoning                 | BBH (test)   | Top-1 Accuracy 83.84      | 27
Mathematical Reasoning            | GSM8K (test) | Top-1 Accuracy 93.03      | 24
