
MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis

About

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality into two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures its characteristic features. Together, these representations provide a holistic view of the multimodal data, which is then fused for task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.
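The core idea of the dual-subspace projection can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, weight initialization, and `predict` function are assumptions for demonstration, the encoders are single linear layers rather than the paper's networks, and the similarity/difference/reconstruction losses that MISA trains with are omitted. The key structural point it shows is that one projection is shared across all modalities (modality-invariant) while each modality also gets its own private projection (modality-specific), and all resulting vectors are concatenated for fusion.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 16, 8  # illustrative: common input dim, subspace dim
MODALITIES = ["text", "audio", "video"]

# Shared encoder: the SAME weights for every modality, so projections
# land in one modality-invariant subspace.
W_shared = rng.normal(size=(D, H))
# Private encoders: separate weights per modality, capturing
# modality-specific features.
W_private = {m: rng.normal(size=(D, H)) for m in MODALITIES}
# Fusion head: concatenate all six H-dim vectors -> scalar sentiment score.
W_fuse = rng.normal(size=(2 * len(MODALITIES) * H,))

def predict(features):
    """features: dict mapping modality name -> (D,) feature vector."""
    parts = []
    for m in MODALITIES:
        x = features[m]
        parts.append(np.tanh(x @ W_shared))      # invariant representation
        parts.append(np.tanh(x @ W_private[m]))  # specific representation
    fused = np.concatenate(parts)                # holistic multimodal view
    return float(fused @ W_fuse)                 # regression-style output

feats = {m: rng.normal(size=D) for m in MODALITIES}
score = predict(feats)
```

In the actual framework the invariant subspace is additionally enforced by a similarity loss across modalities, and the specific subspace by an orthogonality (difference) loss; the sketch only reproduces the projection-and-fuse structure.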

Devamanyu Hazarika, Roger Zimmermann, Soujanya Poria · 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 83.6 | 238 |
| Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 | 85.3 | 206 |
| Semantic Segmentation | NYU V2 | mIoU | 43.4 | 74 |
| 3D Object Detection | SUN RGB-D (test) | mAP@0.25 | 56.7 | 64 |
| Multimodal Sentiment Analysis | CMU-MOSI | MAE | 0.783 | 59 |
| Multimodal Sentiment Analysis | MOSEI (test) | MAE | 0.555 | 49 |
| Sentiment Analysis | CMU-MOSEI (test) | Acc (2-class) | 85.5 | 40 |
| Multimodal Sentiment Analysis | MOSI (test) | MAE | 0.783 | 34 |
| Multimodal Emotion Recognition | CMU-MOSI | Acc-7 | 42.3 | 31 |
| Multimodal Emotion Recognition | CMU-MOSEI (test) | Acc | 70.522 | 30 |

Showing 10 of 29 rows.
