
SELFDOUBT: Uncertainty Quantification for Reasoning LLMs via the Hedge-to-Verify Ratio

About

Uncertainty estimation for reasoning language models remains difficult to deploy in practice: sampling-based methods are computationally expensive, while common single-pass proxies such as verbalized confidence or trace length are often inconsistent across models. This problem is compounded for proprietary reasoning APIs that expose neither logits nor intermediate token probabilities, leaving practitioners with no reliable uncertainty signal at inference time. We propose SELFDOUBT, a single-pass uncertainty framework that resolves this impasse by extracting behavioral signals directly from the reasoning trace itself. Our key signal, the Hedge-to-Verify Ratio (HVR), detects whether a reasoning trace contains uncertainty markers and, if so, whether they are offset by explicit self-checking behavior. Unlike methods that require multiple sampled traces or model internals, SELFDOUBT operates on a single observed reasoning trajectory, making it suitable for latency- and cost-constrained deployment over any proprietary API. We evaluate SELFDOUBT across seven models and three multi-step reasoning benchmarks (BBH, GPQA-Diamond, and MMLU-Pro). Most notably, traces containing no hedging markers are correct 96% of the time, revealing an emergent high-precision confidence gate at zero additional cost. For the remaining cases, the full SELFDOUBT score significantly outperforms sampling-based semantic entropy at 10x lower inference cost. A deployment cascade combining both stages attains 90% accuracy at 71% coverage without any task-specific labels. These results establish SELFDOUBT as a scalable, production-ready foundation for uncertainty estimation over proprietary reasoning models.

Satwik Pandey, Suresh Raghu, Shashwat Pandey • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Selective Prediction | 3 datasets (mean over all 21 runs) | AUROC 0.7895 | 16 |
| Selective Prediction | 3 datasets (Trace) | AUROC 0.7984 | 8 |
| Selective Prediction | BBH, GPQA, and MMLU-Pro Pooled (test) | -- | 8 |
| Self-doubt detection | Original Source Datasets | Self-Doubt AUROC 84.06 | 7 |
| Self-doubt detection | MuSR 90-trace | AUROC (Self-doubt) 83.66 | 7 |
