Multi-view Information Bottleneck Without Variational Approximation
About
By "intelligently" fusing the complementary information across different views, multi-view learning is able to improve the performance of classification tasks. In this work, we extend the information bottleneck principle to a supervised multi-view learning scenario and use the recently proposed matrix-based R{\'e}nyi's $\alpha$-order entropy functional to optimize the resulting objective directly, without the necessity of variational approximation or adversarial training. Empirical results in both synthetic and real-world datasets suggest that our method enjoys improved robustness to noise and redundant information in each view, especially given limited training samples. Code is available at~\url{https://github.com/archy666/MEIB}.
Qi Zhang, Shujian Yu, Jingmin Xin, Badong Chen • 2022
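For orientation, below is a minimal NumPy sketch of the matrix-based Rényi's $\alpha$-order entropy functional that the abstract refers to: entropy is computed from the eigenspectrum of a trace-normalized kernel Gram matrix, joint entropy from the Hadamard product of two such matrices, and mutual information from their combination. The RBF kernel, bandwidth `sigma`, and `alpha = 1.01` here are illustrative assumptions, not the paper's settings; see the linked repository for how the full MEIB objective is built from estimators of this kind.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF kernel Gram matrix for samples stored in the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def normalize(K):
    """Trace-normalize: A_ij = K_ij / (n * sqrt(K_ii * K_jj)), so tr(A) = 1."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d) / K.shape[0]

def renyi_entropy(A, alpha=1.01):
    """Matrix-based Renyi's alpha-order entropy (in bits) of a normalized Gram matrix."""
    lam = np.linalg.eigvalsh(A)
    lam = np.clip(lam, 1e-12, None)  # guard against tiny negative eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def joint_entropy(A, B, alpha=1.01):
    """Joint entropy via the Hadamard (element-wise) product, re-normalized to trace 1."""
    AB = A * B
    return renyi_entropy(AB / np.trace(AB), alpha)

def mutual_information(X, Y, sigma=1.0, alpha=1.01):
    """I_alpha(X;Y) = S_alpha(A) + S_alpha(B) - S_alpha(A, B)."""
    A, B = normalize(rbf_gram(X, sigma)), normalize(rbf_gram(Y, sigma))
    return renyi_entropy(A, alpha) + renyi_entropy(B, alpha) - joint_entropy(A, B, alpha)

# Toy usage: MI between two noisy views of the same latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(128, 1))
x1 = z + 0.1 * rng.normal(size=(128, 1))
x2 = z + 0.1 * rng.normal(size=(128, 1))
print(mutual_information(x1, x2))
```

Because every quantity above is a differentiable function of the Gram-matrix eigenvalues, the information-bottleneck terms can be optimized directly by gradient descent, which is what removes the need for variational bounds or adversarial training.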
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 60.8 | 238 |
| Sentiment Analysis | CMU-MOSEI (test) | Acc (2-class) | 63.2 | 40 |
| Multimodal regression | Superconductivity (test) | RMSE | 14.04 | 13 |
| Regression | Brain-Age | MAE | 7.83 | 6 |
| Multivariate Regression | Vision&Touch | MSE | 6.19 | 6 |
| Multimodal regression | CT Slices (test) | RMSE | 1.258 | 5 |
| Regression | Bimodal MNIST (test) | MAE | 10.17 | 5 |