Weight Uncertainty in Neural Networks
About
We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra • 2015
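The core idea above — maintain a Gaussian distribution over each weight, sample weights via the reparameterisation trick, and backpropagate through the sample to minimise the variational free energy (KL to the prior minus expected log-likelihood) — can be sketched on a toy one-weight regression problem. This is a minimal illustrative sketch, not the paper's exact setup: the data (slope 2.0), learning rate, unit observation noise, N(0, 1) prior, and the closed-form KL (the paper uses a scale-mixture prior and Monte Carlo KL estimates) are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise (true slope 2.0 is an arbitrary choice)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.1 * rng.normal(size=200)
n = len(x)

def softplus(z):
    # sigma = log(1 + exp(rho)) keeps the posterior scale positive
    return np.log1p(np.exp(z))

# Variational posterior q(w) = N(mu, sigma^2); prior p(w) = N(0, 1)
mu, rho = 0.0, -3.0
lr = 0.05

for step in range(5000):
    sigma = softplus(rho)
    eps = rng.normal()
    w = mu + sigma * eps  # reparameterisation: w is a deterministic function of (mu, rho, eps)

    # Per-sample gradient of the Gaussian negative log-likelihood w.r.t. the sampled w
    d_nll_dw = np.mean((w * x - y) * x)

    # Closed-form gradients of KL(q || N(0,1)), scaled by 1/n to match the mean NLL
    d_kl_dmu = mu / n
    d_kl_dsigma = (sigma - 1.0 / sigma) / n
    d_sigma_drho = 1.0 / (1.0 + np.exp(-rho))  # derivative of softplus

    # Backpropagate through the sample: chain rule through w = mu + sigma * eps
    mu -= lr * (d_nll_dw + d_kl_dmu)
    rho -= lr * (d_nll_dw * eps + d_kl_dsigma) * d_sigma_drho

print(f"posterior mean {mu:.3f}, posterior std {softplus(rho):.3f}")
```

After training, `mu` settles near the true slope while `softplus(rho)` gives the remaining uncertainty about it; that per-weight uncertainty is what the paper exploits for regularisation and exploration.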
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-10 | Accuracy | 89.98 | 507 |
| Image Classification | FashionMNIST (test) | Accuracy | 91.03 | 218 |
| Out-of-Distribution Detection | SVHN | AUROC | 92.14 | 62 |
| Out-of-Distribution Detection | FashionMNIST (ID) vs MNIST (OoD) | AUROC | 0.931 | 61 |
| Out-of-Distribution Detection | SVHN (ID) vs TinyImageNet (OoD) (test) | AUROC | 68.05 | 46 |
| Diabetic Retinopathy Diagnosis | APTOS 2019 (Population Shift) | AUC | 93.8 | 36 |
| Diabetic Retinopathy Diagnosis | EyePACS In-Domain | AUC | 88.2 | 36 |
| Image Classification | CIFAR10 Corrupted | Accuracy | 79.36 | 20 |
| Image Classification | CIFAR10 0-4 (test) | Accuracy | 84.05 | 12 |
| Image Classification | CIFAR100 0-49 In-Distribution (test) | Accuracy | 56.02 | 12 |
Showing 10 of 25 rows.