
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

About

There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model: uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
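The "learned attenuation" mentioned in the abstract corresponds, for regression, to a heteroscedastic loss in which the network predicts a per-input log-variance s = log σ² alongside its output: L = ½ exp(−s)·(y − f(x))² + ½ s. A minimal NumPy sketch of that loss (function name and values are illustrative, not from the paper's code):

```python
import numpy as np

def attenuated_loss(y_true, y_pred, log_var):
    """Per-sample heteroscedastic (aleatoric) regression loss:
    0.5 * exp(-s) * (y - f)^2 + 0.5 * s, with s = log sigma^2.
    A large predicted variance down-weights the squared residual
    (learned attenuation) but is penalized by the 0.5 * s term,
    so the model cannot simply predict infinite noise everywhere."""
    residual_sq = (y_true - y_pred) ** 2
    return 0.5 * np.exp(-log_var) * residual_sq + 0.5 * log_var

# The same residual costs less when the model flags the input as noisy
# (high log_var) than when it claims to be confident (log_var = 0).
noisy_loss = attenuated_loss(3.0, 1.0, log_var=2.0)
confident_loss = attenuated_loss(3.0, 1.0, log_var=0.0)
```

In practice the log-variance head is trained jointly with the prediction head, so the attenuation is learned from the data rather than tuned by hand.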

Alex Kendall, Yarin Gal · 2017

Related benchmarks

Task | Dataset | Result | Rank
Semantic Segmentation | NYU v2 (test) | mIoU 37.3 | 248
Depth Prediction | NYU Depth V2 (test) | -- | 113
Out-of-Distribution Detection | CIFAR-10 (in-distribution) vs. TinyImageNet (out-of-distribution) (test) | AUROC 63.23 | 71
OOD Detection | CIFAR-10 (test) | AUROC 64 | 40
Automatic Speech Recognition | SWITCHBOARD swbd | WER 12.7 | 39
Out-of-Distribution Detection | MNIST (in-distribution) vs. Fashion-MNIST (OOD) (test) | AUPR 0.9277 | 36
Automatic Speech Recognition | CHiME-4 simu (test) | WER 9.7 | 31
Dense Anomaly Detection | SMIYC AnomalyTrack | AP 36.8 | 30
Automatic Speech Recognition | TED-LIUM V3 | WER 4.7 | 26
Regression | Boston UCI (test) | -- | 26
Showing 10 of 48 rows
