
Weight Uncertainty in Neural Networks

About

We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
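The core idea can be sketched in a few lines: each weight gets a Gaussian variational posterior q(w) = N(mu, sigma^2), a weight sample is drawn via the reparameterisation w = mu + sigma * eps, and gradients flow back into mu and rho (where sigma = softplus(rho)). The sketch below applies this to Bayesian linear regression with hand-written gradients. Assumptions not taken from the paper: a standard-normal N(0, 1) prior, a known observation noise, and a closed-form KL term in place of the paper's fully Monte Carlo estimate of the variational free energy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true + noise (all names here are illustrative)
N, D, s = 200, 2, 0.5
w_true = np.array([1.5, -0.7])
X = rng.normal(size=(N, D))
y = X @ w_true + s * rng.normal(size=N)

# Variational posterior q(w) = N(mu, sigma^2), with sigma = softplus(rho) > 0
mu = np.zeros(D)
rho = np.full(D, -2.0)
lr = 1e-3

for step in range(3000):
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma positive
    eps = rng.normal(size=D)
    w = mu + sigma * eps                   # reparameterised weight sample

    # Gradient of the negative log-likelihood w.r.t. the sampled weights
    grad_w = -(X.T @ (y - X @ w)) / s**2

    # Closed-form KL(q || N(0, I)) gradients: d/dmu = mu, d/dsigma = sigma - 1/sigma;
    # the eps factor routes grad_w into rho, as in the paper's update rule
    grad_mu = grad_w + mu
    grad_rho = (grad_w * eps + sigma - 1.0 / sigma) / (1.0 + np.exp(-rho))

    mu -= lr * grad_mu
    rho -= lr * grad_rho

sigma = np.log1p(np.exp(rho))
print("posterior mean:", mu)     # approaches w_true
print("posterior std: ", sigma)  # shrinks as the data pins the weights down
```

With enough data the posterior mean approaches the true weights while the posterior standard deviations contract; on held-out or extrapolated inputs the remaining weight uncertainty widens the predictive distribution, which is what the paper exploits for regression and for exploration in reinforcement learning.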

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra · 2015

Related benchmarks

Task                            Dataset                                                       Metric            Result  Rank
Image Classification            CIFAR-10                                                      Accuracy          89.98   507
Image Classification            FashionMNIST (test)                                           Accuracy          91.03   260
Out-of-Distribution Detection   CIFAR-10 (ID) vs CIFAR-100 (OOD)                              AUC               71.77   66
Out-of-Distribution Detection   SVHN                                                          AUROC             92.14   62
Out-of-Distribution Detection   FashionMNIST (ID) vs MNIST (OOD)                              AUROC             0.931   61
Out-of-Distribution Detection   SVHN (ID) vs TinyImageNet (OOD) (test)                        AUROC             68.05   46
Diabetic Retinopathy Diagnosis  APTOS 2019 (Population Shift)                                 AUC               93.8    36
Diabetic Retinopathy Diagnosis  EyePACS In-Domain                                             AUC               88.2    36
Image Classification            CIFAR10 Corrupted                                             Accuracy          79.36   20
Out-of-Distribution Detection   6 OOD detection tasks (CIFAR-10, SVHN, mini-ImageNet) (test)  Rank AUROC Score  8       16
Showing 10 of 33 rows
