
Evaluating and Calibrating LLM Confidence on Questions with Multiple Correct Answers

About

Confidence calibration is essential for making large language models (LLMs) reliable, yet existing training-free methods have been primarily studied under single-answer question answering. In this paper, we show that these methods break down in the presence of multiple valid answers, where disagreement among equally correct responses leads to systematic underestimation of confidence. To enable a systematic study of this phenomenon, we introduce MACE, a benchmark of 12,000 factual questions spanning six domains with varying numbers of correct answers. Experiments across 15 representative calibration methods and four LLM families (7B-72B) reveal that while accuracy increases with answer cardinality, estimated confidence consistently decreases, causing severe miscalibration for questions with mixed answer counts. To address this issue, we propose Semantic Confidence Aggregation (SCA), which aggregates confidence over multiple high-probability sampled responses. SCA achieves state-of-the-art calibration performance under mixed-answer settings while preserving strong calibration on single-answer questions.
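The abstract describes SCA only at a high level: sample several high-probability responses, then aggregate confidence across semantically equivalent answers so that multiple valid answers reinforce rather than dilute each other. A minimal sketch of that idea follows; the clustering-by-predicate scheme, function names, and toy numbers are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of semantic confidence aggregation: group sampled
# responses that mean the same thing and sum their confidences, so that
# surface-form disagreement among equally correct answers no longer
# suppresses the overall confidence estimate.

def aggregate_confidence(samples, are_equivalent):
    """samples: list of (answer_text, confidence) pairs.
    are_equivalent: predicate deciding semantic equivalence of two answers.
    Returns a dict mapping one representative per cluster to its
    normalized aggregated confidence."""
    clusters = []  # list of [representative_answer, summed_confidence]
    for answer, conf in samples:
        for cluster in clusters:
            if are_equivalent(answer, cluster[0]):
                cluster[1] += conf  # same meaning: pool the confidence
                break
        else:
            clusters.append([answer, conf])  # new semantic cluster
    total = sum(c for _, c in clusters)
    return {rep: c / total for rep, c in clusters}

# Toy usage: two surface forms of the same answer get pooled.
samples = [("Paris", 0.5), ("paris", 0.3), ("Lyon", 0.2)]
same = lambda a, b: a.strip().lower() == b.strip().lower()
print(aggregate_confidence(samples, same))
```

In a real system the equivalence predicate would come from an NLI model or embedding similarity rather than string matching, but the aggregation step is the part that counters the underestimation effect the abstract identifies.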

Yuhan Wang, Shiyu Ni, Zhikai Ding, Zihang Zhan, Yuanzi Li, Keping Bi • 2026

Related benchmarks

Task                      Dataset       Result        Rank
Confidence calibration    MACE (test)   AUROC 81.2    84
Model Calibration         MACE          AUROC 82      84
LLM Calibration           MACE          --            60
