Can Pruning Improve Reasoning? Revisiting Long-CoT Compression with Capability in Mind for Better Reasoning
About
Long chain-of-thought (Long-CoT) reasoning improves accuracy in LLMs, yet its verbose, self-reflective style often hinders effective distillation into small language models (SLMs). We revisit Long-CoT compression through the lens of capability alignment and ask: Can pruning improve reasoning? We propose Prune-on-Logic, a structure-aware framework that transforms Long-CoT into logic graphs and selectively prunes low-utility reasoning steps under self-verification constraints. Through systematic analysis across three pruning strategies targeting entire chains, core reasoning, and verification, we find that verification pruning consistently improves accuracy while reducing token usage, whereas pruning reasoning steps or indiscriminate pruning degrades performance. Our study reveals that effective pruning aligns supervision with model capacity rather than merely shortening inputs. Gains hold across tasks, model scales, and CoT capability, with larger models benefiting more from pruning due to richer but more redundant reasoning. Our empirical findings highlight pruning as a structural optimization strategy for aligning CoT reasoning with SLM capacity.
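The selective-pruning idea from the abstract can be sketched in code: represent the Long-CoT as a graph of typed steps, then drop only low-utility verification nodes while keeping every core reasoning node intact. This is a minimal illustrative sketch, not the paper's implementation — the `Step` structure, the per-step `utility` scores, and the threshold rule are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of verification-only pruning on a logic graph.
# Node fields, utility scores, and the threshold rule are illustrative
# assumptions, not the paper's actual method.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    text: str
    role: str            # "reasoning" (core step) or "verification" (self-check)
    utility: float       # assumed per-step utility score in [0, 1]
    deps: List[int] = field(default_factory=list)  # indices of prerequisite steps

def prune_verification(steps: List[Step], threshold: float) -> List[Step]:
    """Keep all core reasoning steps; drop verification steps below threshold."""
    kept_idx = {
        i for i, s in enumerate(steps)
        if s.role == "reasoning" or s.utility >= threshold
    }
    pruned = []
    for i in sorted(kept_idx):
        s = steps[i]
        # Re-wire dependencies so the remaining logic graph stays well-formed.
        s.deps = [d for d in s.deps if d in kept_idx]
        pruned.append(s)
    return pruned

# Toy Long-CoT: two reasoning steps and one redundant self-verification.
chain = [
    Step("Compute 12 * 7 = 84", "reasoning", 0.9),
    Step("Double-check: 84 / 7 = 12, consistent", "verification", 0.2, deps=[0]),
    Step("Add 16: 84 + 16 = 100", "reasoning", 0.8, deps=[0]),
]
compressed = prune_verification(chain, threshold=0.5)
print(len(compressed))  # → 2: the low-utility verification step is removed
```

Under this sketch, the compressed chain is shorter in tokens but preserves every step the answer logically depends on, which is the alignment the paper attributes to verification pruning.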
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Reasoning | WeMath | Accuracy | 63.4 | 43 |
| Multimodal Reasoning | MMStar | Accuracy | 57.7 | 29 |
| Multimodal Reasoning | MathVista | Accuracy | 45.9 | 29 |
| Multimodal Reasoning | R1-Onevision-Bench (Overall) | Accuracy | 34.1 | 23 |
| Multimodal Reasoning | MMMU | Accuracy | 55.7 | 8 |
| Multimodal Reasoning | R1-Onevision-Bench Math | Accuracy | 25.4 | 8 |
| Multimodal Reasoning | R1-Onevision-Bench Physics | Accuracy | 34.4 | 8 |
| Multimodal Reasoning | R1-Onevision-Bench Deduction | Accuracy | 27.3 | 8 |
| Visual Information Preservation and Explainability Evaluation | Multimodal Reasoning Benchmarks (MathVista, WeMath, MMStar, MMMU, R1-Onevision-Bench) (test) | Visual Info Preservation Score | 3.51 | 4 |