
VeriThinker: Learning to Verify Makes Reasoning Model Efficient

About

Large Reasoning Models (LRMs) excel at complex tasks using Chain-of-Thought (CoT) reasoning. However, their tendency to overthink leads to unnecessarily lengthy reasoning chains, dramatically increasing inference costs. To mitigate this issue, we introduce VeriThinker, a novel approach for CoT compression. Unlike conventional methods that fine-tune LRMs directly on the original reasoning task using synthetic concise CoT data, we instead fine-tune the model solely through an auxiliary verification task. By training LRMs to accurately verify the correctness of CoT solutions, they inherently become more discerning about the necessity of subsequent self-reflection steps, thereby effectively suppressing overthinking. Extensive experiments validate that VeriThinker substantially reduces reasoning chain lengths while maintaining or even slightly improving accuracy. When applied to DeepSeek-R1-Distill-Qwen-7B, our approach reduces reasoning tokens on MATH500 from 3790 to 2125 while improving accuracy by 0.8% (94.0% to 94.8%); on AIME25, tokens decrease from 14321 to 10287 with a 2.1% accuracy gain (38.7% to 40.8%). Additionally, our experiments demonstrate that VeriThinker also generalizes zero-shot to speculative reasoning. Code is available at https://github.com/czg1225/VeriThinker.
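
To make the core idea concrete, below is a minimal sketch of what verification-task fine-tuning could look like. It assumes the auxiliary task is framed as supervised fine-tuning in which the model reads a (question, candidate CoT solution) pair and is trained to emit a short correct/incorrect verdict; the prompt template, label tokens, toy dataset, and hyperparameters here are illustrative assumptions, not the authors' exact recipe (see the linked repository for that).

```python
# Illustrative sketch of fine-tuning an LRM on a CoT-verification task.
# Assumption: the task is cast as next-token prediction of a "True"/"False"
# verdict, with the loss masked to the verdict tokens only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Tiny hypothetical dataset: (question, candidate CoT solution, correctness label).
verification_dataset = [
    ("What is 2 + 3?", "2 + 3 = 5. So the answer is 5.", True),
    ("What is 2 + 3?", "2 + 3 = 6. So the answer is 6.", False),
]

def build_example(question, cot_solution, is_correct):
    # Hypothetical verification prompt; the real template is defined in the repo.
    prompt = (
        f"Question: {question}\n"
        f"Candidate solution:\n{cot_solution}\n"
        "Is this solution correct? Answer True or False:"
    )
    target = " True" if is_correct else " False"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids[0]
    target_ids = tokenizer(target, add_special_tokens=False, return_tensors="pt").input_ids[0]
    input_ids = torch.cat([prompt_ids, target_ids])
    labels = input_ids.clone()
    labels[: prompt_ids.shape[0]] = -100  # compute loss only on the verdict tokens
    return input_ids, labels

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for question, solution, label in verification_dataset:
    input_ids, labels = build_example(question, solution, label)
    out = model(input_ids=input_ids.unsqueeze(0), labels=labels.unsqueeze(0))
    out.loss.backward()  # standard causal-LM cross-entropy, masked to the verdict
    optimizer.step()
    optimizer.zero_grad()
```

The point of the sketch is the training signal, not the loop mechanics: the model is never fine-tuned to produce shorter chains directly; it only learns to judge solutions, and the paper reports that this judgment skill transfers into fewer unnecessary self-reflection steps at inference time.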

Zigeng Chen, Xinyin Ma, Gongfan Fang, Ruonan Yu, Xinchao Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-discipline Multimodal Understanding | MMMU | Accuracy | 60.1 | 266 |
| Visual Mathematical Reasoning | MathVision | Accuracy | 23.1 | 63 |
| Mathematical Reasoning | MathVision | Accuracy | 29.1 | 38 |
| Physical Reasoning | PhyX | Accuracy | 45.5 | 16 |
| Visual Logical Reasoning | VisuLogic | Accuracy | 25.2 | 9 |
| Visual Logical Reasoning | VisuLogic | Accuracy | 27.5 | 8 |
| Multimodal Understanding | MMMU | Accuracy | 52.2 | 8 |
