
Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models

About

Large language models (LLMs) specializing in natural language generation (NLG) have recently started exhibiting promising capabilities across a variety of domains. However, gauging the trustworthiness of responses generated by LLMs remains an open challenge, with limited research on uncertainty quantification (UQ) for NLG. Furthermore, existing literature typically assumes white-box access to language models, which is becoming unrealistic either due to the closed-source nature of the latest LLMs or computational constraints. In this work, we investigate UQ in NLG for *black-box* LLMs. We first differentiate *uncertainty* from *confidence*: the former refers to the "dispersion" of the potential predictions for a fixed input, while the latter refers to the confidence in a particular prediction or generation. We then propose and compare several confidence/uncertainty measures, applying them to *selective NLG*, where unreliable results can either be ignored or flagged for further assessment. Experiments were carried out with several popular LLMs on question-answering datasets (for evaluation purposes). Results reveal that a simple measure of semantic dispersion can be a reliable predictor of the quality of LLM responses, providing valuable insights for practitioners on uncertainty management when adopting LLMs. The code to replicate our experiments is available at https://github.com/zlin7/UQ-NLG.
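The core idea — sampling several generations for the same prompt and measuring how much they disagree — can be sketched in a few lines. This is a minimal illustration only: the paper's measures use semantic similarity (e.g. entailment-based models), whereas the stand-in below uses simple token-overlap (Jaccard) distance between sampled answers, and the `dispersion` function name is our own, not from the authors' code.

```python
from itertools import combinations

def dispersion(samples):
    """Average pairwise Jaccard distance between sampled answers.

    A crude lexical stand-in for semantic dispersion: consistent
    samples yield a score near 0, divergent samples a score near 1.
    """
    token_sets = [set(s.lower().split()) for s in samples]
    pairs = list(combinations(token_sets, 2))
    if not pairs:
        return 0.0  # a single sample carries no dispersion signal

    def jaccard_dist(a, b):
        union = a | b
        return 1.0 - (len(a & b) / len(union) if union else 1.0)

    return sum(jaccard_dist(a, b) for a, b in pairs) / len(pairs)

# Selective NLG: answer only when dispersion is below a threshold,
# otherwise abstain and defer the query for human review.
consistent = dispersion(["Paris", "Paris", "Paris"])        # 0.0
divergent = dispersion(["Paris", "Lyon", "It is Marseille"])  # 1.0
```

In a real pipeline the samples would come from repeated API calls to a black-box LLM at nonzero temperature, and the threshold would be tuned on a held-out set against a quality metric such as AUROC.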

Zhen Lin, Shubhendu Trivedi, Jimeng Sun • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Detection | TriviaQA | AUROC | 0.7102 | 438 |
| Hallucination Detection | TriviaQA (test) | AUC-ROC | 71.02 | 183 |
| Radiology Report Generation | MIMIC-CXR (test) | -- | -- | 172 |
| Hallucination Detection | HotpotQA | AUROC | 0.55 | 163 |
| Uncertainty Quantification | Average of 6 datasets | PRR | 43.7 | 120 |
| Hallucination Detection | CSQA | AUROC | 62.94 | 107 |
| Hallucination Detection | TruthfulQA | AUC (ROC) | 0.5881 | 102 |
| Hallucination Detection | CoQA | Mean AUROC | 0.679 | 100 |
| Hallucination Detection | GSM8K | AUROC | 73.66 | 93 |
| Question Answering | NQ (test) | AUROC | 82.6 | 90 |

Showing 10 of 65 rows.
