
Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis

About

Representation learning is a significant and challenging task in multimodal learning. Effective modality representations should contain two kinds of characteristics: consistency and difference. However, because of the unified multimodal annotation, existing methods are restricted in capturing differentiated information, and additional unimodal annotations are costly in time and labor. In this paper, we design a label generation module based on a self-supervised learning strategy to acquire independent unimodal supervisions. We then jointly train the multimodal and unimodal tasks to learn consistency and difference, respectively. Moreover, during the training stage, we design a weight-adjustment strategy to balance the learning progress among the different subtasks, guiding them to focus on samples with larger differences between modality supervisions. Finally, we conduct extensive experiments on three public multimodal benchmark datasets. The experimental results validate the reliability and stability of the auto-generated unimodal supervisions. On the MOSI and MOSEI datasets, our method surpasses the current state-of-the-art methods; on the SIMS dataset, it achieves performance comparable to that obtained with human-annotated unimodal labels. The full code is available at https://github.com/thuiar/Self-MM.
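The training objective described in the abstract can be sketched as a weighted multi-task loss. The sketch below is an illustrative assumption, not the authors' exact formulation: the function name `joint_loss`, the use of L1 regression losses, and the choice of the label gap as the weight are all simplifications of the weight-adjustment idea (subtasks focus more on samples whose auto-generated unimodal label differs more from the shared multimodal label).

```python
# Illustrative sketch (not the authors' exact code): a joint multi-task
# loss combining one multimodal regression task with per-modality
# unimodal tasks, each weighted by how much its self-supervised label
# differs from the shared multimodal label.

def joint_loss(y_multi_pred, y_multi, uni_preds, uni_labels):
    """y_multi_pred, y_multi: floats (multimodal prediction and human label).
    uni_preds, uni_labels: dicts keyed by modality ('text', 'audio', 'vision')
    holding unimodal predictions and auto-generated unimodal labels."""
    # Multimodal task: plain L1 regression loss against the human label.
    loss = abs(y_multi_pred - y_multi)
    for m in uni_preds:
        # Assumed weighting: the gap between the auto-generated unimodal
        # label and the multimodal label, so samples with more distinct
        # modality supervision contribute more to the unimodal subtask.
        w = abs(uni_labels[m] - y_multi)
        loss += w * abs(uni_preds[m] - uni_labels[m])
    return loss
```

For a sample whose text label nearly matches the multimodal label, the text subtask is almost switched off, while a sample with conflicting modality cues keeps a strong unimodal gradient signal.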

Wenmeng Yu, Hua Xu, Ziqi Yuan, Jiele Wu • 2021

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score: 85.2 | 332
Multimodal Sentiment Analysis | CMU-MOSI (test) | F1: 85.95 | 316
Multimodal Sentiment Analysis | MOSEI | MAE: 0.53 | 168
Multimodal Sentiment Analysis | CMU-MOSI | Accuracy (2-Class): 84.9 | 144
Emotion Recognition | IEMOCAP | -- | 115
Multimodal Sentiment Analysis | CH-SIMS (test) | F1 Score: 80.44 | 108
Multimodal Sentiment Analysis | SIMS (test) | Accuracy (2-Class): 78 | 78
Multimodal Sentiment Analysis | MOSI | Accuracy: 84.8 | 72
Multimodal Emotion Recognition | CMU-MOSEI (test) | ACC7: 53.2 | 56
Multimodal Sentiment Analysis | MOSEI (test) | MAE: 0.529 | 49
(Showing 10 of 34 rows.)
