Thinking by Subtraction: Confidence-Driven Contrastive Decoding for LLM Reasoning

About

Recent work on test-time scaling for large language model (LLM) reasoning typically assumes that uniformly allocating more inference-time computation improves correctness. However, prior studies show that reasoning uncertainty is highly localized: a small subset of low-confidence tokens disproportionately contributes to reasoning errors and unnecessary output expansion. Motivated by this observation, we propose Thinking by Subtraction, realized as Confidence-Driven Contrastive Decoding (CCD), an approach that improves reasoning reliability through targeted token-level intervention. CCD detects low-confidence tokens during decoding and intervenes selectively at these positions: it constructs a contrastive reference by replacing high-confidence tokens with minimal placeholders, then refines predictions by subtracting this reference distribution at the low-confidence locations. Experiments show that CCD significantly improves accuracy across mathematical reasoning benchmarks while substantially reducing output length, with minimal KV-cache overhead. As a training-free method, CCD enhances reasoning reliability through targeted low-confidence intervention without computational redundancy. Our code will be made available at: https://github.com/bolo-web/CCD.
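To make the intervention concrete, below is a minimal sketch of one greedy decoding step in the spirit of the abstract, not the paper's exact formulation. It assumes top-1 probability as the confidence measure; the threshold tau, the subtraction weight alpha, and the helper name ccd_next_token are hypothetical, and the reference logits are assumed to come from a second forward pass over a context whose high-confidence tokens were replaced by placeholders.

```python
import torch
import torch.nn.functional as F

def ccd_next_token(logits_main: torch.Tensor,
                   logits_ref: torch.Tensor,
                   tau: float = 0.7,
                   alpha: float = 1.0) -> torch.Tensor:
    """One greedy decoding step of the CCD idea (hedged sketch).

    logits_main: logits from the full context, shape (vocab,)
    logits_ref:  logits from the contrastive reference context, where
                 high-confidence tokens were replaced by placeholders
    tau:   confidence threshold below which to intervene (hypothetical value)
    alpha: strength of the reference subtraction (hypothetical value)
    """
    log_p = F.log_softmax(logits_main, dim=-1)
    confidence = log_p.exp().max().item()  # top-1 probability as confidence
    if confidence >= tau:
        # High-confidence position: keep the model's own prediction.
        return log_p.argmax()
    # Low-confidence position: subtract the reference distribution
    # ("thinking by subtraction") before picking the next token.
    log_q = F.log_softmax(logits_ref, dim=-1)
    return (log_p - alpha * log_q).argmax()

# Toy usage with random logits standing in for two model forward passes.
vocab = 32
main_logits = torch.randn(vocab)
ref_logits = torch.randn(vocab)
print(ccd_next_token(main_logits, ref_logits))
```

Because the intervention fires only at low-confidence positions, the extra reference pass (and its KV-cache cost) is paid only for the small subset of tokens the abstract identifies as error-prone, which is consistent with the claimed minimal overhead.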

Lexiang Tang, Weihao Gao, Bingchen Zhao, Lu Ma, Qiao Jin, Bang Yang, Yuexian Zou • 2026

Related benchmarks

Task                    Dataset   Result                   Rank
Mathematical Reasoning  BRUMO25   --                       37
Mathematical Reasoning  HMMT25    Avg@8 Score: 50.83       20
Mathematical Reasoning  AIME 24   Mean@8 Accuracy: 81.67   9
Mathematical Reasoning  AIME 25   Mean@8 Accuracy: 73.75   9
