
Learning Factorized Multimodal Representations

About

Learning multimodal representations is a fundamentally complex research problem due to the presence of multiple heterogeneous sources of information. Although the presence of multiple modalities provides additional valuable information, there are two key challenges to address when learning from multimodal data: 1) models must learn the complex intra-modal and cross-modal interactions for prediction and 2) models must be robust to unexpected missing or noisy modalities during testing. In this paper, we propose to optimize for a joint generative-discriminative objective across multimodal data and labels. We introduce a model that factorizes representations into two sets of independent factors: multimodal discriminative and modality-specific generative factors. Multimodal discriminative factors are shared across all modalities and contain joint multimodal features required for discriminative tasks such as sentiment prediction. Modality-specific generative factors are unique for each modality and contain the information required for generating data. Experimental results show that our model is able to learn meaningful multimodal representations that achieve state-of-the-art or competitive performance on six multimodal datasets. Our model demonstrates flexible generative capabilities by conditioning on independent factors and can reconstruct missing modalities without significantly impacting performance. Lastly, we interpret our factorized representations to understand the interactions that influence multimodal learning.
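To make the factorization concrete, here is a minimal toy sketch of the idea described in the abstract: each modality's representation is split into a factor shared across modalities (used for discrimination) and a modality-specific factor (used for reconstruction). Everything below — function names, the averaging/residual encoders, and the dimensions — is an illustrative assumption using plain Python, not the authors' actual model.

```python
# Toy sketch of factorized multimodal representations (illustrative only).
# Shared factor: joint information across modalities, for discriminative tasks.
# Private factor: modality-specific information, for generating/reconstructing data.
import random

random.seed(0)
D = 4  # toy feature size per modality

def encode_shared(x_a, x_b):
    # Multimodal discriminative factor: a joint feature shared by both
    # modalities (here, simply the element-wise average as a stand-in).
    return [(a + b) / 2 for a, b in zip(x_a, x_b)]

def encode_private(x, shared):
    # Modality-specific generative factor: what remains of this modality
    # after removing the shared component.
    return [xi - si for xi, si in zip(x, shared)]

def reconstruct(shared, private):
    # A modality is generated from the shared factor plus its own
    # private factor.
    return [si + pi for si, pi in zip(shared, private)]

# Two toy "modalities" (e.g. language and acoustic features).
x_lang = [random.random() for _ in range(D)]
x_audio = [random.random() for _ in range(D)]

f_shared = encode_shared(x_lang, x_audio)
f_lang = encode_private(x_lang, f_shared)
f_audio = encode_private(x_audio, f_shared)
```

With these linear stand-ins each modality reconstructs exactly from its two factors; the point of the sketch is the interface: if one modality is missing at test time, the shared discriminative factor can still drive prediction while the generative factors handle reconstruction.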

Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov• 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 83.4 | 332
Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 81.6 | 316
Multimodal Sentiment Analysis | CMU-MOSI | Accuracy (2-Class) | 80 | 144
Emotion Recognition | IEMOCAP | -- | -- | 115
Multimodal Sentiment Analysis | CH-SIMS (test) | F1 Score | 75.58 | 108
Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 84.4 | 96
Multimodal Sentiment Analysis | MOSEI (test) | MAE | 0.568 | 49
Multimodal Emotion Recognition | CMU-MOSI (test) | ACC7 | 36.2 | 47
Multimodal Sentiment Analysis | MOSI (test) | MAE | 0.877 | 34
Multimodal Emotion Recognition | CMU-MOSI | ACC7 | 36.2 | 31
Showing 10 of 16 rows
