
Improving Compositionality of Neural Networks by Decoding Representations to Inputs

About

In traditional software programs, it is easy to trace program logic from variables back to inputs, apply assertion statements to block erroneous behavior, and compose programs together. Although deep learning programs have demonstrated strong performance on novel applications, they sacrifice many of the functionalities of traditional software programs. With this as motivation, we take a modest first step towards improving deep learning programs by jointly training a generative model to constrain neural network activations to "decode" back to inputs. We call this design a Decodable Neural Network, or DecNN. Doing so enables a form of compositionality in neural networks, where one can recursively compose DecNN with itself to create an ensemble-like model with uncertainty. In our experiments, we demonstrate applications of this uncertainty to out-of-distribution detection, adversarial example detection, and calibration -- while matching standard neural networks in accuracy. We further explore this compositionality by combining DecNN with pretrained models, where we show promising results that neural networks can be regularized to avoid using protected features.
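A minimal sketch of the idea described above, in PyTorch: a classifier whose hidden representation is jointly trained to decode back to the input, plus a recursive self-composition loop whose prediction spread serves as uncertainty. All architecture sizes, the `beta` loss weight, and the function names are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecNN(nn.Module):
    """Sketch of a Decodable Neural Network: a classifier whose hidden
    activations are constrained (via a decoder head) to reconstruct the input.
    Layer sizes here are made up for illustration."""
    def __init__(self, in_dim=784, hid_dim=128, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, n_classes)
        self.decoder = nn.Linear(hid_dim, in_dim)  # jointly trained generative head

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

def decnn_loss(logits, recon, x, y, beta=1.0):
    # Classification loss plus the "decode back to inputs" constraint;
    # beta is an assumed trade-off weight.
    return F.cross_entropy(logits, y) + beta * F.mse_loss(recon, x)

def recursive_predict(model, x, depth=3):
    """Compose the model with itself: feed each decoded input back through
    the network, yielding an ensemble of predictions whose per-class spread
    acts as an uncertainty estimate."""
    probs = []
    for _ in range(depth):
        logits, recon = model(x)
        probs.append(logits.softmax(dim=-1))
        x = recon.detach()  # decoded input becomes the next input
    stacked = torch.stack(probs)            # (depth, batch, n_classes)
    return stacked.mean(0), stacked.std(0)  # mean prediction, uncertainty

model = DecNN()
x = torch.randn(4, 784)
mean_p, unc = recursive_predict(model, x)
```

High `unc` on a test point would flag it as out-of-distribution or potentially misclassified, which is one way to read the detection benchmarks reported below.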

Mike Wu, Noah Goodman, Stefano Ermon · 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | MNIST (test) | Accuracy | 97.8 | 882 |
| Image Classification | Fashion-MNIST (test) | Accuracy | 88.8 | 568 |
| Out-of-Distribution Detection | Fashion-MNIST (ID) vs MNIST (OoD) | AUROC | 0.793 | 61 |
| Image Classification | CelebA (test) | Accuracy | 90.8 | 37 |
| Calibration | MNIST | ECE | 0.33 | 33 |
| Out-of-Distribution Detection | MNIST vs Fashion-MNIST (test) | AUROC | 0.812 | 27 |
| One-Class Classification | CelebA (test) | AUC | 69.2 | 24 |
| Misclassification Detection | MNIST (test) | AUROC | 0.869 | 13 |
| Misclassification Detection | Fashion-MNIST (test) | ROC-AUC | 0.795 | 12 |
| Speech Classification | AudioMNIST (test) | Accuracy | 93.8 | 12 |
Showing 10 of 24 rows
