
Constraint-Rectified Training for Efficient Chain-of-Thought

About

Chain-of-Thought (CoT) has significantly enhanced the reasoning capabilities of Large Language Models (LLMs), especially when combined with reinforcement learning (RL) based post-training methods. While longer reasoning traces can improve answer quality and unlock abilities such as self-correction, they also incur high inference costs and often introduce redundant steps, known as overthinking. Recent research seeks to develop efficient reasoning strategies that balance reasoning length and accuracy, either through length-aware reward design or prompt-based calibration. However, these heuristic-based approaches can suffer severe accuracy drops and are highly sensitive to hyperparameters. To address these problems, we introduce CRT (Constraint-Rectified Training), a principled post-training framework based on reference-guarded constrained optimization, yielding a more stable and interpretable formulation for efficient reasoning. CRT alternates between minimizing reasoning length and rectifying accuracy only when performance falls below the reference, enabling stable and effective pruning of redundant reasoning. We further extend CRT with a two-stage training scheme that first discovers the shortest reliable reasoning patterns and then refines accuracy under a learned length budget, preventing the re-emergence of verbose CoT. Our comprehensive evaluation shows that this framework consistently reduces token usage while reliably maintaining answer quality. Further analysis reveals that CRT improves reasoning efficiency not only by shortening responses but also by reducing internal language redundancy, motivating a new evaluation metric. Moreover, CRT-based training naturally yields a sequence of intermediate checkpoints that span a spectrum of explanation lengths while preserving correctness, enabling fine-grained control over reasoning verbosity without retraining.
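The core alternation the abstract describes — minimize reasoning length, but switch to rectifying accuracy whenever performance falls below the reference — can be sketched as a simple objective scheduler. This is a minimal illustrative sketch based only on the abstract; the function names (`crt_step`, `run_schedule`) and the hard reference threshold are assumptions, not the paper's actual implementation, which involves a full constrained RL formulation.

```python
# Hypothetical sketch of CRT's alternating update rule (names and the
# simple thresholding are illustrative assumptions from the abstract):
# optimize length reduction while accuracy stays at or above the
# reference model's accuracy; otherwise rectify accuracy first.

def crt_step(batch_accuracy: float, ref_accuracy: float) -> str:
    """Pick which objective to optimize on the current training step."""
    if batch_accuracy < ref_accuracy:
        # Accuracy fell below the reference guard: restore it before
        # pruning any more reasoning tokens.
        return "rectify_accuracy"
    # Constraint satisfied: keep shortening redundant reasoning.
    return "minimize_length"


def run_schedule(accuracies, ref_accuracy):
    """Trace the objective chosen at each step of a toy training run."""
    return [crt_step(a, ref_accuracy) for a in accuracies]


if __name__ == "__main__":
    # Toy trace: accuracy dips below the 0.85 reference at the third step,
    # triggering a rectification step before length minimization resumes.
    print(run_schedule([0.90, 0.88, 0.80, 0.87], ref_accuracy=0.85))
```

In the paper's two-stage extension, a scheduler like this would first run until the shortest reliable reasoning pattern is found, then freeze the discovered length budget while refining accuracy.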

Qinhang Wu, Sen Lin, Ming Zhang, Yingbin Liang, Ness B. Shroff • 2026

Related benchmarks

Task                     Dataset                                                            Metric                  Result   Rank
Mathematical Reasoning   MATH500                                                            Accuracy                86.06    57
Mathematical Reasoning   SAT Math                                                           Accuracy                93.16    44
Mathematical Reasoning   Olympiad Bench                                                     Accuracy                13.63    23
Mathematical Reasoning   AMC 23                                                             Accuracy                74.38    9
Mathematical Reasoning   Out-domain aggregate (SAT Math, AMC 23, AIME 24, Olympiad Bench)   Avg. accuracy (A_bar)   52.43    9
Mathematical Reasoning   GSM8K + MATH500 aggregate                                          Avg. accuracy           85.35    9
Mathematical Reasoning   AIME 24                                                            Accuracy                28.54    9
Mathematical Reasoning   Out-domain average (SAT Math, AMC 23, AIME 24, Olympiad Bench)     Avg. accuracy           62.99    7
