Understanding the Limitations of Variational Mutual Information Estimators
About
Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high-dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs. We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.
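To make the bias/variance discussion concrete, here is a minimal NumPy sketch of the Donsker-Varadhan lower bound that MINE optimizes, alongside a clipped variant illustrating the variance-reduction idea. This is an illustration under stated assumptions, not the paper's reference implementation: the critic is replaced by the closed-form log density ratio for 1-D correlated Gaussians, and the names `dv_bound`, `smile_bound`, `oracle_critic`, and the threshold `tau` are hypothetical.

```python
import numpy as np

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on MI (the objective MINE optimizes).

    t_joint:    critic scores T(x, y) on samples from the joint p(x, y).
    t_marginal: critic scores T(x, y') on samples from p(x)p(y).
    The log-partition term log E[exp(T)] is the source of the
    exponentially growing variance discussed above.
    """
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

def smile_bound(t_joint, t_marginal, tau=5.0):
    """Clipped variant (hypothetical sketch): restricting exp(T) to
    [exp(-tau), exp(tau)] caps the variance of the partition-term
    estimate at the cost of some bias, in the spirit of the
    variance-reduced estimator described above."""
    clipped = np.clip(np.exp(t_marginal), np.exp(-tau), np.exp(tau))
    return t_joint.mean() - np.log(clipped.mean())

# Demo with an "oracle" critic: for 1-D Gaussians with correlation rho,
# the optimal critic is the closed-form log density ratio, and the true
# MI is -0.5 * log(1 - rho**2). A real estimator would learn T instead.
rng = np.random.default_rng(0)
rho, n = 0.9, 10_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
y_shuffled = rng.permutation(y)  # breaks the pairing -> marginal samples

def oracle_critic(x, y, rho=rho):
    # Closed-form log p(x, y) / (p(x) p(y)) for correlated 1-D Gaussians.
    return (-0.5 * np.log(1.0 - rho**2)
            - (rho**2 * (x**2 + y**2) - 2.0 * rho * x * y)
            / (2.0 * (1.0 - rho**2)))

print("true MI :", -0.5 * np.log(1.0 - rho**2))
print("DV bound:", dv_bound(oracle_critic(x, y), oracle_critic(x, y_shuffled)))
print("clipped :", smile_bound(oracle_critic(x, y), oracle_critic(x, y_shuffled)))
```

With a learned critic in high dimensions, the unclipped partition term can be dominated by a few large exp(T) values, which is exactly the variance pathology the abstract refers to.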
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Mutual Information Estimation | Gaussian d=20 (test) | Bias | 0.25 | 60 |
| Mutual Information Estimation | Gaussian (d=5, N=64) | Variance | 0.05 | 42 |
| Mutual Information Estimation | Gaussian (d=20, N=64) | Bias | 0.18 | 30 |
| Mutual Information Estimation | Gaussian (d=10, N=64) | Bias | 0.24 | 30 |
| Domain Adaptation | MNIST to MNIST-M (test) | -- | -- | 24 |
| Learning label-irrelevant representations | CMU-PIE cropped | Latency (sec./max step) | 0.621 | 7 |
| Disentangled Representation Learning | dSprites | MSE | 0.65 | 7 |
| Mutual Information Estimation | Gaussian (d=5, N=64, MI=6) | Variance | 0.32 | 6 |
| Mutual Information Estimation | Gaussian (d=5, N=64, MI=8) | Variance | 1.4 | 6 |
| Mutual Information Estimation | Gaussian (d=5, N=64, MI=10) | Variance | 8.89 | 6 |