
A Survey on Uncertainty Quantification Methods for Deep Learning

About

Deep neural networks (DNNs) have achieved tremendous success in computer vision, natural language processing, and scientific and engineering domains. However, DNNs can make unexpected, incorrect, yet overconfident predictions, leading to serious consequences in high-stakes applications such as autonomous driving, medical diagnosis, and disaster response. Uncertainty quantification (UQ) estimates the confidence of DNN predictions in addition to their accuracy. In recent years, many UQ methods have been developed for DNNs. It is valuable to systematically categorize these methods and compare their strengths and limitations. Existing surveys mostly categorize UQ methodologies by neural network architecture or Bayesian formulation, while overlooking the uncertainty sources each method addresses, making it difficult to select an appropriate approach in practice. To fill this gap, this paper presents a taxonomy of UQ methods for DNNs based on uncertainty sources (e.g., data versus model uncertainty). We summarize the advantages and disadvantages of each category, and illustrate how UQ can be applied to machine learning problems (e.g., active learning, out-of-distribution robustness, and deep reinforcement learning). We also identify future research directions, including UQ for large language models (LLMs), AI-driven scientific simulations, and deep neural networks with structured outputs.
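The abstract's core distinction between data (aleatoric) and model (epistemic) uncertainty can be made concrete with the mutual-information decomposition commonly used by ensemble-based UQ methods. The sketch below is illustrative only; the function and variable names are ours, not taken from the paper, and a deep ensemble is just one of the method families the survey covers.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) of a categorical distribution."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """Split an ensemble's predictive uncertainty into data (aleatoric)
    and model (epistemic) parts via the mutual-information decomposition.

    member_probs: array of shape (n_members, n_classes) holding each
    ensemble member's softmax output for a single input.
    """
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)               # entropy of the averaged prediction
    aleatoric = entropy(member_probs).mean()  # average per-member entropy
    epistemic = total - aleatoric             # mutual information (>= 0)
    return total, aleatoric, epistemic

# Hypothetical example: three members disagree on a 2-class input,
# so the epistemic (model) component dominates.
probs = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
total, aleatoric, epistemic = decompose_uncertainty(probs)
```

When all members agree, the epistemic term vanishes and any remaining uncertainty is attributed to the data itself; large disagreement between members signals model uncertainty, which is exactly the signal exploited in applications such as active learning and out-of-distribution detection.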

Wenchong He, Zhe Jiang, Tingsong Xiao, Zelin Xu, Yukun Li • 2023

Related benchmarks

Task                           Dataset                 Metric    Result  Rank
Out-of-Distribution Detection  CIFAR-10 vs CIFAR-100   AUROC     78.02   70
Image Classification           CIFAR-10                Accuracy  86.7    10
