
Learning Factorized Multimodal Representations

About

Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information. Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data: 1) models must learn the complex intra-modal and cross-modal interactions for prediction and 2) models must be robust to unexpected missing or noisy modalities during testing. In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels. We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors. Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction. Modality-specific generative factors are unique for each modality and contain the information required for generating data. Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets. Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance. Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.
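The factorization described above — a shared multimodal discriminative factor plus one modality-specific generative factor per modality, with decoders that reconstruct each modality from both — can be sketched structurally in plain Python. This is a minimal illustration, not the paper's implementation: all dimensions, encoder/decoder shapes, and names (`enc_gen`, `enc_disc`, `dec`, `forward`) are hypothetical, weights are random, and a real model would add a discriminative prediction head and train the joint generative-discriminative objective.

```python
import random

random.seed(0)

def linear(x, W, b):
    # y = W x + b over plain Python lists (stands in for a learned layer)
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def init(out_dim, in_dim):
    # Random weights as a placeholder for trained parameters
    W = [[random.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
    return W, [0.0] * out_dim

DIM_X, DIM_F = 4, 3  # hypothetical input and factor dimensions, 2 modalities

# One generative encoder per modality, one shared discriminative encoder,
# and one decoder per modality that reads [shared factor; specific factor].
enc_gen = [init(DIM_F, DIM_X) for _ in range(2)]
enc_disc = init(DIM_F, 2 * DIM_X)
dec = [init(DIM_X, 2 * DIM_F) for _ in range(2)]

def forward(x1, x2):
    # Modality-specific generative factors (unique per modality)
    f_gen = [linear(x1, *enc_gen[0]), linear(x2, *enc_gen[1])]
    # Multimodal discriminative factor (shared across modalities)
    f_disc = linear(x1 + x2, *enc_disc)
    # Each modality is reconstructed from the shared + its specific factor,
    # which is what allows reconstructing a missing modality from the rest.
    recons = [linear(f_disc + f_gen[i], *dec[i]) for i in range(2)]
    return f_disc, f_gen, recons

f_disc, f_gen, recons = forward([1.0, 0.0, -1.0, 0.5], [0.2, 0.1, 0.0, -0.3])
```

The split means the discriminative head only ever sees `f_disc`, while the reconstruction losses flow through both factor sets, which is one way to realize the joint generative-discriminative objective the abstract describes.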

Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 81.6 | 238 |
| Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 83.4 | 206 |
| Emotion Recognition | IEMOCAP | -- | -- | 71 |
| Multimodal Sentiment Analysis | MOSEI (test) | MAE | 0.568 | 49 |
| Sentiment Analysis | CMU-MOSEI (test) | Acc (2-class) | 84.4 | 40 |
| Multimodal Sentiment Analysis | MOSI (test) | MAE | 0.877 | 34 |
| Multimodal Emotion Recognition | CMU-MOSI | ACC7 | 36.2 | 31 |
| Multimodal Sentiment Analysis | CMU-MOSI Word Aligned (test) | Accuracy (7-Class) | 36.2 | 21 |
| Multimodal Emotion Recognition | CMU-MOSI (test) | ACC7 | 36.2 | 21 |
| Sentiment Analysis | CMU-MOSI | Accuracy (2-class) | 77.6 | 21 |

Showing 10 of 15 rows
