Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
About
Representation learning is a significant and challenging task in multimodal learning. Effective modality representations should contain two parts of characteristics: the consistency and the difference. Due to the unified multimodal annotation, existing methods are restricted in capturing differentiated information. However, acquiring additional unimodal annotations is time- and labor-intensive. In this paper, we design a label generation module based on a self-supervised learning strategy to acquire independent unimodal supervisions. We then jointly train the multimodal and unimodal tasks to learn the consistency and the difference, respectively. Moreover, during the training stage, we design a weight-adjustment strategy to balance the learning progress among the different subtasks; it guides the subtasks to focus on samples with larger differences between modality supervisions. Finally, we conduct extensive experiments on three public multimodal baseline datasets. The experimental results validate the reliability and stability of the auto-generated unimodal supervisions. On the MOSI and MOSEI datasets, our method surpasses the current state-of-the-art methods. On the SIMS dataset, our method achieves performance comparable to that obtained with human-annotated unimodal labels. The full code is available at https://github.com/thuiar/Self-MM.
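The weight-adjustment idea described above can be sketched as a per-sample weighted loss: samples whose auto-generated unimodal label disagrees more with the shared multimodal label receive a larger weight, so the unimodal subtask concentrates on modality-specific information. This is a minimal illustrative sketch, not the repository's implementation; the function name, the absolute-error loss, and the `tanh` weighting form are assumptions for illustration.

```python
import math

def weighted_unimodal_loss(preds, uni_labels, multi_labels):
    """Hedged sketch of a weight-adjusted loss for one unimodal subtask.

    preds        -- model predictions for this modality
    uni_labels   -- auto-generated unimodal supervisions
    multi_labels -- human-annotated multimodal labels
    """
    total = 0.0
    for p, y_s, y_m in zip(preds, uni_labels, multi_labels):
        # Weight grows with the disagreement between unimodal and
        # multimodal supervision (illustrative choice of tanh).
        w = math.tanh(abs(y_s - y_m))
        total += w * abs(p - y_s)  # weighted absolute error
    return total / len(preds)
```

When the unimodal and multimodal labels coincide, the weight vanishes and the sample contributes nothing to the unimodal loss, which matches the stated goal of focusing on samples that carry differentiated information.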
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score: 85.2 | 332 |
| Multimodal Sentiment Analysis | CMU-MOSI (test) | F1: 85.95 | 316 |
| Multimodal Sentiment Analysis | MOSEI | MAE: 0.53 | 168 |
| Multimodal Sentiment Analysis | CMU-MOSI | Accuracy (2-Class): 84.9 | 144 |
| Emotion Recognition | IEMOCAP | -- | 115 |
| Multimodal Sentiment Analysis | CH-SIMS (test) | F1 Score: 80.44 | 108 |
| Multimodal Sentiment Analysis | SIMS (test) | Accuracy (2-Class): 78 | 78 |
| Multimodal Sentiment Analysis | MOSI | Accuracy: 84.8 | 72 |
| Multimodal Emotion Recognition | CMU-MOSEI (test) | ACC7: 53.2 | 56 |
| Multimodal Sentiment Analysis | MOSEI (test) | MAE: 0.529 | 49 |