A Theory of Multimodal Learning
About
Human perception of the empirical world involves recognizing the diverse appearances, or 'modalities', of underlying objects. Although this perspective has long been considered in philosophy and cognitive science, the study of multimodality remains relatively under-explored within machine learning. Moreover, current studies of multimodal machine learning are largely limited to empirical practice, lacking theoretical foundations beyond heuristic arguments. An intriguing finding from this practice is that a model trained on multiple modalities can outperform a fine-tuned unimodal model, even on unimodal tasks. This paper provides a theoretical framework that explains this phenomenon by studying the generalization properties of multimodal learning algorithms. We demonstrate that multimodal learning admits a superior generalization bound compared to unimodal learning, up to a factor of $O(\sqrt{n})$, where $n$ is the sample size. This advantage arises when both connection and heterogeneity exist between the modalities.
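Schematically, the claimed $O(\sqrt{n})$ gap can be pictured by comparing a standard slow-rate uniform-convergence bound with a fast-rate bound. The sketch below is illustrative only: the complexity term $C$ and the exact rates are placeholders, not the paper's precise theorem statement.

```latex
% Illustrative sketch (not the paper's exact theorem).
% Suppose the unimodal learner satisfies a standard slow-rate bound,
% while the multimodal learner attains a fast rate in the sample size n:
\[
\underbrace{\tilde{O}\!\left(\sqrt{\tfrac{C}{n}}\right)}_{\text{unimodal bound}}
\;\Big/\;
\underbrace{\tilde{O}\!\left(\tfrac{C}{n}\right)}_{\text{multimodal bound}}
\;=\; O\!\left(\sqrt{n}\right),
\]
% which matches the up-to-$O(\sqrt{n})$ improvement quoted in the abstract.
```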
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio-Visual Event Localization | AVE (test) | Accuracy | 67.41 | 54 |
| Multimodal Classification | Kinetics-Sounds (test) | Multimodal Accuracy | 59.83 | 30 |
| Multimodal Classification | CREMA-D | Accuracy | 60.26 | 28 |
| Audio-Visual Event Classification | VGGSound (test) | Fusion Top-1 Acc | 60.8 | 23 |
| Sentiment analysis and emotion recognition | UR-FUNNY | Accuracy | 63.1 | 21 |
| Sentiment analysis and emotion recognition | CMU-MOSEI (test) | Inference Time (s) | 0.279 | 5 |