
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation

About

We introduce a method to measure uncertainty in large language models. For tasks like question answering, it is essential to know when we can trust the natural language outputs of foundation models. We show that measuring uncertainty in natural language is challenging because of "semantic equivalence" -- different sentences can mean the same thing. To overcome these challenges we introduce semantic entropy -- an entropy which incorporates linguistic invariances created by shared meanings. Our method is unsupervised, uses only a single model, and requires no modifications to off-the-shelf language models. In comprehensive ablation studies we show that the semantic entropy is more predictive of model accuracy on question answering data sets than comparable baselines.
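The core idea above — cluster sampled generations by shared meaning, then take the entropy over meaning-clusters rather than over individual strings — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper clusters with a bidirectional-entailment check from an NLI model, which is stood in for here by a trivial string-normalization comparison, and the helper names are invented for this sketch.

```python
import math

def semantically_equivalent(a: str, b: str) -> bool:
    # Stand-in for the paper's bidirectional-entailment check (an NLI model
    # tests whether a entails b AND b entails a). A normalized string
    # comparison is used here purely for illustration.
    norm = lambda s: s.lower().strip().rstrip(".")
    return norm(a) == norm(b)

def semantic_entropy(samples):
    """samples: list of (generated_answer, sequence_probability) pairs
    sampled from a single off-the-shelf language model."""
    clusters = []  # each cluster holds answers that share a meaning
    for answer, prob in samples:
        for cluster in clusters:
            if semantically_equivalent(answer, cluster[0][0]):
                cluster.append((answer, prob))
                break
        else:
            clusters.append([(answer, prob)])
    # Aggregate probability mass per meaning-cluster, renormalize over
    # the drawn samples, and compute the entropy over clusters.
    masses = [sum(p for _, p in cluster) for cluster in clusters]
    total = sum(masses)
    return -sum((m / total) * math.log(m / total) for m in masses)

# "Paris" and "paris." collapse into one meaning-cluster (mass 0.8),
# "Lyon" forms another (mass 0.2), so the entropy is over {0.8, 0.2}.
answers = [("Paris", 0.5), ("paris.", 0.3), ("Lyon", 0.2)]
print(round(semantic_entropy(answers), 3))
```

Because equivalent phrasings pool their probability mass, a model that is confident in one meaning but phrases it many ways gets low semantic entropy, whereas naive token-level entropy would overstate its uncertainty.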

Lorenz Kuhn, Yarin Gal, Sebastian Farquhar • 2023

Related benchmarks

Task                   | Dataset          | Result        | Rank
Hallucination Detection | TriviaQA         | AUROC 0.69    | 265
Hallucination Detection | TriviaQA (test)  | AUC-ROC 71    | 169
Hallucination Detection | HaluEval (test)  | AUC-ROC 61.57 | 126
Hallucination Detection | HotpotQA         | AUROC 0.68    | 118
Question Answering      | TriviaQA         | EM 70.2       | 116
Confidence Calibration  | MACE (test)      | AUROC 78.9    | 84
Model Calibration       | MACE             | AUROC 77.3    | 84
Hallucination Detection | NQ (test)        | AUC-ROC 69    | 84
Uncertainty Estimation  | TriviaQA (test)  | AUROC 81.4    | 78
Uncertainty Estimation  | JudgeBench (test)| AUROC 66.87   | 77
Showing 10 of 90 rows
