Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

About

Modern machine learning methods, including deep learning, have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors, including sample bias and non-stationarity. In such settings, well-calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.

Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek • 2019
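The abstract above centers on two ideas: measuring how well confidence matches accuracy (calibration) and averaging the predictions of several models ("marginalizing over models", e.g. deep ensembles). The sketch below is a minimal illustration of both, using the Expected Calibration Error (ECE) as a representative calibration metric; the equal-width binning, the synthetic data, and all variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Expected Calibration Error (ECE) over equal-width confidence bins.

    probs:  (N, K) predicted class probabilities for N examples, K classes.
    labels: (N,)   integer ground-truth labels.
    """
    confidences = probs.max(axis=1)                      # confidence of the predicted class
    correct = (probs.argmax(axis=1) == labels).astype(float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |accuracy - mean confidence| in the bin, weighted by the bin's share of samples
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece

# "Marginalizing over models": average the predictive distributions of an
# ensemble of M independently trained models before measuring calibration.
# ensemble_probs is a hypothetical array of shape (M, N, K); real predictions
# would come from the trained networks being evaluated.
ensemble_probs = np.random.dirichlet(np.ones(10), size=(5, 1000))
labels = np.random.randint(0, 10, size=1000)
mean_probs = ensemble_probs.mean(axis=0)                 # the ensemble's predictive distribution
print("ECE:", expected_calibration_error(mean_probs, labels))
```

In the paper's benchmark, metrics of this kind are tracked as the test distribution is shifted progressively further from the training distribution (e.g. increasing corruption severity), which is what exposes the gap between post-hoc calibration and model averaging.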

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-view Classification | HMDB (test) | Accuracy | 42.63 | 14 |
| Multi-view Classification | CUB (test) | Accuracy | 76.08 | 14 |
| Multi-view Classification | PIE (test) | Accuracy | 64.65 | 14 |
| Multi-view Classification | Caltech101 (test) | Accuracy | 73.45 | 14 |
| Multi-view Classification | Handwritten in-domain (test) | Test Accuracy | 99.25 | 6 |
| Multi-view Classification | CIFAR10 Corrupted (test) | Test Accuracy | 74.76 | 6 |
| Multi-view Classification | CUB in-domain (test) | Test Accuracy | 92.33 | 6 |
| Multi-view Classification | Scene15 in-domain (test) | Test Accuracy | 0.7175 | 6 |
| Multi-view Classification | HMDB in-domain (test) | Accuracy | 71.68 | 6 |
| Out-of-Distribution Detection | CIFAR10-C vs SVHN OOD (test) | AUROC | 78.8 | 6 |
Showing 10 of 15 rows.
