
Deep Variational Information Bottleneck

About

We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy • 2016
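The VIB objective described above combines a task loss with a rate penalty: minimize the cross-entropy of the predictions from a stochastic code z, plus β times the KL divergence between the encoder distribution q(z|x) and a fixed N(0, I) prior, with z drawn via the reparameterization trick. A minimal NumPy sketch of that loss, assuming a diagonal-Gaussian encoder and a caller-supplied decoder (the function names, shapes, and the β value are illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_sigma):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through mu and sigma during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over the batch.
    return 0.5 * (mu**2 + np.exp(2 * log_sigma) - 2 * log_sigma - 1).sum(axis=1).mean()

def cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy against integer labels.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def vib_loss(mu, log_sigma, decode, labels, beta=1e-3):
    # VIB objective: classification loss on a sampled code + beta * rate penalty.
    z = reparameterize(mu, log_sigma)
    return cross_entropy(decode(z), labels) + beta * kl_to_standard_normal(mu, log_sigma)

# Toy usage: 4 examples, 8-dim latent, 3 classes, linear decoder (all hypothetical).
mu = rng.standard_normal((4, 8))
log_sigma = -1.0 * np.ones((4, 8))
W = rng.standard_normal((8, 3))
loss = vib_loss(mu, log_sigma, lambda z: z @ W, np.array([0, 1, 2, 0]))
```

β controls the information bottleneck trade-off: β = 0 reduces to ordinary cross-entropy training, while larger β compresses the representation more aggressively.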

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Named Entity Recognition | CoNLL 2003 (test) | - | - | 539 |
| Image Classification | CINIC-10 (test) | - | - | 177 |
| Out-of-Distribution Detection | CIFAR-10 vs SVHN (test) | AUROC | 0.97 | 101 |
| Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 (test) | AUROC | 88 | 93 |
| Image Classification | CIFAR-10N (Worst) | Accuracy | 78.88 | 78 |
| Image Classification | CIFAR-10N (Aggregate) | Accuracy | 86.11 | 74 |
| Out-of-Distribution Detection | CIFAR-10 in-distribution, LSUN out-of-distribution (test) | AUROC | 96 | 73 |
| Named Entity Recognition | WNUT 2017 (test) | F1 Score | 51.6 | 63 |
| Image Classification | CIFAR-10-C (test) | - | - | 61 |
| Image Classification | CIFAR-100-C v1 (test) | Error Rate (Average) | 42.2 | 60 |

Showing 10 of 49 rows
