
Reasoning or Fluency? Dissecting Probabilistic Confidence in Best-of-N Selection

About

Probabilistic confidence metrics are increasingly adopted as proxies for reasoning quality in Best-of-N selection, under the assumption that higher confidence reflects higher reasoning fidelity. In this work, we challenge this assumption by investigating whether these metrics truly capture inter-step causal dependencies necessary for valid reasoning. We introduce three classes of inter-step causality perturbations that systematically disrupt dependencies between reasoning steps while preserving local fluency. Surprisingly, across diverse model families and reasoning benchmarks, we find that selection accuracy degrades only marginally under these disruptions. Even severe interventions, such as applying hard attention masks that directly prevent the model from attending to prior reasoning steps, do not substantially reduce selection performance. These findings provide strong evidence that current probabilistic metrics are largely insensitive to logical structure, and primarily capture surface-level fluency or in-distribution priors instead. Motivated by this gap, we propose a contrastive causality metric that explicitly isolates inter-step causal dependencies, and demonstrate that it yields more faithful output selection than existing probability-based approaches.
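To make the setting concrete, here is a minimal sketch of Best-of-N selection using mean token log-probability as the confidence score, a common probabilistic proxy. The function names, the candidate strings, and the log-probability values are all hypothetical stand-ins, not the paper's actual data or metric; the toy numbers simply illustrate the failure mode the paper probes, where a fluent but wrong chain can outscore a correct one.

```python
def confidence(token_logprobs):
    """Mean token log-probability: a common probabilistic confidence proxy.
    Values closer to 0 mean the model found the sequence more fluent."""
    return sum(token_logprobs) / len(token_logprobs)

def best_of_n(candidates):
    """Pick the candidate whose tokens receive the highest average
    log-probability. `candidates` maps an answer label to its per-token
    log-probs (both are hypothetical stand-ins here)."""
    return max(candidates, key=lambda ans: confidence(candidates[ans]))

# Toy illustration: a fluent-but-wrong chain outscores a correct one.
candidates = {
    "wrong_but_fluent":    [-0.1, -0.2, -0.1],  # mean ~ -0.13
    "correct_but_awkward": [-0.5, -0.9, -0.4],  # mean = -0.60
}
print(best_of_n(candidates))  # -> wrong_but_fluent
```

Because this score aggregates only local token probabilities, it has no direct view of whether step k actually depends on step k-1, which is the insensitivity the perturbation experiments above expose.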

Hojin Kim, Jaehyung Kim • 2026

Related benchmarks

Task                    Dataset        Result                      Rank
Scientific Reasoning    GPQA Diamond   -                           28
Logical Reasoning       LogiQA         Selection Accuracy 43.57    6
Mathematical Reasoning  MATH 500       Selection Accuracy 63.87    6
Analytical Reasoning    AR-LSAT        Selection Accuracy 22.2     6
Mathematical Reasoning  GSM8K          Selection Accuracy 32.97    6
