
Improving Semantic Uncertainty Quantification in Language Model Question-Answering via Token-Level Temperature Scaling

About

Calibration is central to reliable semantic uncertainty quantification, yet prior work has largely focused on discrimination and neglected it. Because calibration and discrimination capture distinct aspects of uncertainty, focusing on discrimination alone yields an incomplete picture. We address this gap by systematically evaluating both aspects across a broad set of confidence measures. We show that current approaches, particularly fixed-temperature heuristics, produce semantic confidence distributions that are systematically miscalibrated and poorly discriminative. We then demonstrate that optimising a single scalar temperature, which we argue provides a suitable inductive bias, is a surprisingly simple yet effective solution. Our exhaustive evaluation confirms that temperature scaling consistently improves semantic calibration, discrimination, and downstream entropy, outperforming both heuristic baselines and more expressive token-level recalibration methods on question-answering tasks.
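The core idea in the abstract, fitting a single scalar temperature rather than using a fixed heuristic, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it rescales token logits by a temperature T and picks the T that minimises negative log-likelihood on held-out data via a simple grid search (the function names and the grid are our own choices).

```python
import numpy as np

def softmax(logits, T):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.05, 5.0, 200)):
    """Pick the scalar T minimising held-out negative log-likelihood.

    logits: (n, k) array of held-out logits; labels: (n,) true class indices.
    A grid search stands in for the gradient-based optimisation a real
    implementation would likely use.
    """
    nlls = []
    for T in grid:
        p = softmax(logits, T)
        nll = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
        nlls.append(nll)
    return grid[int(np.argmin(nlls))]
```

For an overconfident model (large logit margins but only 50% accuracy), the fitted temperature is pushed above 1, flattening the predicted probabilities toward the empirical accuracy; for a well-calibrated model it stays near 1.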

Tom A. Lamb, Desi R. Ivanova, Philip H. S. Torr, Tim G. J. Rudner • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Uncertainty Estimation | TriviaQA (test) | AUROC | 85.7 | 104
Question Answering | NQ | ACE Score | 0.496 | 70
Question Answering | SQuAD | ACE (General) | 0.112 | 70
Question Answering | TriviaQA | ACE | 0.32 | 35
Question Answering | TriviaQA | ACE | 20 | 35
Semantic Uncertainty Quantification | NQ (test) | AUROC | 0.758 | 20
Semantic Uncertainty Quantification | SQuAD (test) | AUROC | 74.8 | 20
Closed-book Generative Question Answering | TriviaQA | E-SC Score | 0.197 | 5
Closed-book Generative Question Answering | NQ | E-SC | 0.358 | 5
Closed-book Generative Question Answering | SQuAD | E-SC | 6.7 | 5
Showing 10 of 13 rows
