Towards Reliable Truth-Aligned Uncertainty Estimation in Large Language Models

About

Uncertainty estimation (UE) aims to detect hallucinated outputs of large language models (LLMs) and thereby improve their reliability. However, UE metrics often exhibit unstable performance across configurations, which significantly limits their applicability. In this work, we formalise this phenomenon as proxy failure: most UE metrics are derived from model behaviour rather than explicitly grounded in the factual correctness of LLM outputs. Building on this, we show that UE metrics become non-discriminative precisely in low-information regimes. To alleviate this, we propose Truth AnChoring (TAC), a post-hoc calibration method that remedies UE metrics by mapping raw scores to truth-aligned scores. Even with noisy, few-shot supervision, TAC supports the learning of well-calibrated uncertainty estimates and yields a practical calibration protocol. Our findings highlight the limitations of treating heuristic UE metrics as direct indicators of truth uncertainty, and position TAC as a necessary step toward more reliable uncertainty estimation for LLMs. The code repository is available at https://github.com/ponhvoan/TruthAnchor/.
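The abstract does not spell out TAC's internals; the paper's repository holds the actual implementation. As a minimal sketch of the general recipe it describes (post-hoc calibration that maps raw UE scores to truth-aligned correctness probabilities using a small, possibly noisy labelled set), one could fit a Platt-style calibrator. The function names and data below are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch of truth-anchored post-hoc calibration.
# The real TAC method lives at github.com/ponhvoan/TruthAnchor; this only
# illustrates the general idea: learn a monotone map from raw UE scores
# to P(output is factually correct) from few-shot supervision.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_truth_anchor(raw_scores, correctness_labels):
    """Fit a Platt-style calibrator: raw UE score -> P(correct).

    raw_scores:         (n,) raw scores from any heuristic UE metric
    correctness_labels: (n,) noisy 0/1 labels (1 = factually correct),
                        e.g. from a handful of annotated examples
    """
    calibrator = LogisticRegression()
    calibrator.fit(np.asarray(raw_scores).reshape(-1, 1), correctness_labels)
    return calibrator

def truth_aligned_score(calibrator, raw_scores):
    """Map raw UE scores to truth-aligned correctness probabilities."""
    return calibrator.predict_proba(np.asarray(raw_scores).reshape(-1, 1))[:, 1]

# Usage: calibrate a raw metric (e.g. negative sequence log-likelihood)
# on a few labelled examples, then apply it to unseen outputs.
raw = np.array([-3.2, -0.8, -2.5, -0.3, -4.1, -0.9, -1.7, -0.5])
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # noisy few-shot supervision
cal = fit_truth_anchor(raw, labels)
print(truth_aligned_score(cal, np.array([-2.0, -0.4])))
```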

Ponhvoan Srey, Quang Minh Nguyen, Xiaobao Wu, Anh Tuan Luu • 2026

Related benchmarks

Task                    Dataset   Result (AUROC %)  Rank
Uncertainty Estimation  TriviaQA  85.56             77
Uncertainty Estimation  SciQA     82.69             56
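For context on the metric in the table: UE benchmarks typically score a method by how well its uncertainty values rank hallucinated answers above correct ones, summarised as AUROC. A minimal illustration of that computation, with made-up scores and labels (not the paper's data):

```python
# Hypothetical illustration of AUROC as used in UE benchmarks: higher
# uncertainty should be assigned to incorrect (hallucinated) answers.
import numpy as np
from sklearn.metrics import roc_auc_score

uncertainty = np.array([0.91, 0.12, 0.78, 0.05, 0.66, 0.23])  # raw UE scores
is_hallucinated = np.array([1, 0, 1, 0, 1, 0])                # 1 = wrong answer

auroc = roc_auc_score(is_hallucinated, uncertainty)
print(f"AUROC: {auroc * 100:.2f}")  # percentage scale, as in the table above
```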
