Hybrid Models with Deep and Invertible Features
About
We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density of the features, and p(targets | features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves accuracy similar to that of purely predictive models. Moreover, the generative component remains a good model of the input features despite the hybrid optimization objective. This enables additional capabilities such as out-of-distribution detection and semi-supervised learning. The availability of the exact joint density p(targets, features) also lets us compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning.
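The core idea, evaluating both the feature density and the predictive distribution in one forward pass, can be sketched in a few lines. The snippet below is a toy illustration, not the paper's implementation: it stands in for the normalizing flow with a single invertible affine map `z = W x + b` (all names here are illustrative) and applies the change-of-variables formula for log p(x) alongside a softmax linear head for p(y | x).

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 4, 3                      # input dimension, number of classes

W = rng.normal(size=(D, D))      # random square matrix: invertible w.p. 1
b = rng.normal(size=D)
V = rng.normal(size=(K, D))      # weights of the linear predictive head
c = np.zeros(K)

def hybrid_forward(x):
    # Invertible feature transform (a stand-in for a deep normalizing flow).
    z = W @ x + b
    # Change of variables: log p(x) = log N(z; 0, I) + log |det dz/dx|.
    log_base = -0.5 * (z @ z + D * np.log(2 * np.pi))
    log_det = np.linalg.slogdet(W)[1]
    log_px = log_base + log_det
    # Predictive distribution p(y | x): softmax linear model on the SAME features z.
    logits = V @ z + c
    p_y = np.exp(logits - logits.max())
    p_y /= p_y.sum()
    return log_px, p_y

log_px, p_y = hybrid_forward(rng.normal(size=D))
print(log_px, p_y)               # density and predictive distribution, one pass
```

Because log p(x) comes out of the same pass, thresholding it gives a simple out-of-distribution score, and log p(x) + log p(y | x) gives the exact joint density the abstract refers to.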
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Open Set Recognition | CIFAR10 | AUROC | 0.655 | 76 |
| Open Set Recognition | TinyImageNet | AUROC | 59.6 | 51 |
| Open Set Recognition | SVHN | AUROC | 0.643 | 51 |
| Open Set Recognition | CIFAR+50 | AUROC | 67.1 | 50 |
| Open Set Recognition | CIFAR+10 | AUROC | 0.67 | 24 |
| Open Set Recognition | MNIST | AUROC | 0.721 | 10 |
| Image Classification | SVHN 1k labeled 72k unlabeled (test) | Accuracy | 95.74 | 8 |
| Image Classification | MNIST 1k labeled 59k unlabeled (test) | Accuracy | 99.27 | 7 |