
ConfTuner: Training Large Language Models to Express Their Confidence Verbally

About

Large Language Models (LLMs) are increasingly deployed in high-stakes domains such as science, law, and healthcare, where accurate expressions of uncertainty are essential for reliability and trust. However, current LLMs often generate incorrect answers with high confidence, a phenomenon known as "overconfidence". Recent efforts have focused on calibrating LLMs' verbalized confidence, i.e., their expressions of confidence in text form, such as "I am 80% confident that...". Existing approaches either rely on prompt engineering or fine-tuning with heuristically generated uncertainty estimates, both of which have limited effectiveness and generalizability. Motivated by the notion of proper scoring rules for calibration in classical machine learning models, we introduce ConfTuner, a simple and efficient fine-tuning method that introduces minimal overhead and does not require ground-truth confidence scores or proxy confidence estimates. ConfTuner relies on a new loss function, the tokenized Brier score, which we theoretically prove to be a proper scoring rule, intuitively meaning that it "correctly incentivizes the model to report its true probability of being correct". ConfTuner improves calibration across diverse reasoning tasks and generalizes to black-box models such as GPT-4o. Our results further show that better-calibrated confidence enables downstream gains in self-correction and model cascade, advancing the development of trustworthy LLM systems. The code is available at https://github.com/liushiliushi/ConfTuner.
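To make the core idea concrete, here is a minimal sketch of a tokenized-Brier-style loss: the model expresses confidence by placing probability mass over a discrete set of confidence tokens, and the loss is the expected Brier score of those verbalized confidences against answer correctness. The binning (11 levels from 0.0 to 1.0) and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical discretization of verbalized confidence into tokens
# "0.0", "0.1", ..., "1.0" (the paper's token vocabulary may differ).
CONF_LEVELS = np.linspace(0.0, 1.0, 11)

def tokenized_brier_loss(token_probs, correct):
    """Expected Brier score under the model's distribution over
    confidence tokens.

    token_probs: probability the model assigns to each confidence token
                 (length 11, sums to 1).
    correct:     1 if the model's answer was right, else 0.
    """
    token_probs = np.asarray(token_probs, dtype=float)
    return float(np.sum(token_probs * (CONF_LEVELS - correct) ** 2))

# A model that puts all its mass on the "0.8" token:
onehot = np.zeros(11)
onehot[8] = 1.0
loss_correct = tokenized_brier_loss(onehot, 1)  # small penalty: ~(0.8 - 1)^2
loss_wrong = tokenized_brier_loss(onehot, 0)    # large penalty: ~(0.8 - 0)^2
```

Because the Brier score is a proper scoring rule, this expected loss is minimized when the mass over confidence tokens reflects the model's true probability of being correct, which is exactly the incentive the abstract describes.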

Yibo Li, Miao Xiong, Jiaying Wu, Bryan Hooi • 2025

Related benchmarks

| Task | Dataset | ECE | Rank |
|---|---|---|---|
| Calibration | NQ | 0.428 | 55 |
| Confidence Calibration in Retrieval-Augmented Generation | Bamboogle k=5 OOD (test) | 0.127 | 24 |
| Calibration | Bamboogle | 0.214 | 24 |
| Calibration | HotpotQA | 0.38 | 24 |
| Calibration | Average (StrategyQA, HotpotQA, NQ, Bamboogle) | 0.352 | 24 |
| Confidence Calibration in Retrieval-Augmented Generation | NQ k=5 OOD (test) | 0.325 | 24 |
| Calibration | StrategyQA | 0.368 | 24 |
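The benchmark results above are reported as Expected Calibration Error (ECE): predictions are binned by stated confidence, and ECE is the bin-size-weighted average gap between accuracy and mean confidence within each bin (lower is better). A minimal sketch of the standard equal-width-bin computation, with illustrative inputs:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE with equal-width confidence bins: weighted average of
    |accuracy - mean confidence| over non-empty bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in bin
    return float(ece)

# Toy example: four answers with verbalized confidences and correctness.
demo_ece = expected_calibration_error([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1])
```

On this toy input each sample lands in its own bin, so the ECE is the plain average of the per-sample |correctness - confidence| gaps.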
