
Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation

About

We introduce a method to measure uncertainty in large language models. For tasks like question answering, it is essential to know when we can trust the natural language outputs of foundation models. We show that measuring uncertainty in natural language is challenging because of "semantic equivalence" -- different sentences can mean the same thing. To overcome these challenges we introduce semantic entropy -- an entropy that incorporates the linguistic invariances created by shared meanings. Our method is unsupervised, uses only a single model, and requires no modifications to off-the-shelf language models. In comprehensive ablation studies we show that semantic entropy is more predictive of model accuracy on question-answering datasets than comparable baselines.
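The core idea above -- an entropy over meanings rather than surface strings -- can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the paper clusters sampled answers via bidirectional entailment with an NLI model, which is stood in for here by a simple case-insensitive string match, and probabilities are assumed to be given per sample rather than computed from length-normalized log-likelihoods.

```python
import math

def semantic_entropy(samples, equivalent=None):
    """Entropy over meaning clusters rather than surface strings.

    `samples` is a list of (answer_text, probability) pairs drawn from the
    model. `equivalent` decides whether two answers share a meaning; the
    default case-insensitive match is a stand-in for the paper's
    bidirectional-entailment clustering (an assumption for illustration).
    """
    if equivalent is None:
        equivalent = lambda a, b: a.strip().lower() == b.strip().lower()

    # Greedily assign each sampled answer to the first matching cluster.
    clusters = []
    for text, prob in samples:
        for cluster in clusters:
            if equivalent(cluster[0][0], text):
                cluster.append((text, prob))
                break
        else:
            clusters.append([(text, prob)])

    # Sum probability mass within each meaning cluster, then take the
    # entropy of the resulting distribution over meanings.
    masses = [sum(p for _, p in c) for c in clusters]
    total = sum(masses)
    return -sum((m / total) * math.log(m / total) for m in masses if m > 0)
```

For samples `[("Paris", 0.4), ("paris", 0.4), ("Lyon", 0.2)]`, "Paris" and "paris" collapse into one cluster, so semantic entropy (about 0.50 nats) is lower than the naive entropy over the three surface strings (about 1.05 nats) -- the model is less uncertain than a string-level count would suggest.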

Lorenz Kuhn, Yarin Gal, Sebastian Farquhar • 2023

Related benchmarks

Task                     Dataset            Metric       Result   Rank
Hallucination Detection  TriviaQA           AUROC        0.737    438
Hallucination Detection  TriviaQA (test)    AUC-ROC      71       183
Question Answering       TriviaQA           EM           70.2     182
Hallucination Detection  HotpotQA           AUROC        0.68     163
Hallucination Detection  HaluEval (test)    AUC-ROC      61.57    126
Hallucination Detection  CSQA               AUROC        64       107
Uncertainty Estimation   TriviaQA (test)    AUROC        81.4     104
Hallucination Detection  TruthfulQA         AUC (ROC)    0.656    102
Hallucination Detection  CoQA               Mean AUROC   0.79     100
Hallucination Detection  GSM8K              AUROC        72.51    93
Showing 10 of 130 rows
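Most rows above report AUROC, which measures how well an uncertainty score separates incorrect (hallucinated) answers from correct ones: 0.5 is chance, 1.0 is perfect separation. A minimal sketch of how it can be computed from per-question uncertainty scores, using the pairwise Mann-Whitney formulation (my own illustration, not code from the paper):

```python
def auroc(scores, labels):
    """AUROC as P(score of a positive > score of a negative), ties count 0.5.

    `scores` are uncertainty values (e.g. semantic entropy per question);
    `labels` mark 1 for an incorrect/hallucinated answer, 0 for correct.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Compare every (positive, negative) pair; count wins and half-ties.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

If every hallucinated answer gets a higher uncertainty score than every correct one, this returns 1.0; a score that carries no signal hovers around 0.5.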
