
Towards Robust Multimodal Sentiment Analysis with Incomplete Data

About

The field of Multimodal Sentiment Analysis (MSA) has recently witnessed an emerging direction seeking to tackle the issue of data incompleteness. Recognizing that the language modality typically contains dense sentiment information, we treat it as the dominant modality and present an innovative Language-dominated Noise-resistant Learning Network (LNLN) to achieve robust MSA. The proposed LNLN features a dominant modality correction (DMC) module and a dominant modality based multimodal learning (DMML) module, which enhance the model's robustness across various noise scenarios by ensuring the quality of the dominant modality's representations. Beyond the architectural design, we perform comprehensive experiments under random data missing scenarios, using diverse and meaningful settings on several popular datasets (e.g., MOSI, MOSEI, and SIMS), providing greater uniformity, transparency, and fairness than existing evaluations in the literature. Empirically, LNLN consistently outperforms existing baselines, demonstrating superior performance across these challenging and extensive evaluations.
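The abstract's random data missing protocol can be illustrated with a minimal sketch. The function below is a hypothetical implementation (not taken from the LNLN code) of a common way such evaluations are run: a random fraction of per-timestep features in each modality is zeroed out to simulate incomplete data.

```python
import numpy as np

def random_feature_missing(features, missing_rate, seed=0):
    """Simulate incomplete multimodal data by zeroing random timesteps.

    features: array of shape (batch, seq_len, dim) for one modality
              (e.g., language, audio, or vision features).
    missing_rate: fraction of timesteps to drop, in [0, 1].
    Returns the corrupted features and the keep-mask of shape (batch, seq_len).
    """
    rng = np.random.default_rng(seed)
    # Keep each timestep with probability (1 - missing_rate).
    keep_mask = rng.random(features.shape[:2]) >= missing_rate
    # Broadcast the mask over the feature dimension to zero dropped steps.
    corrupted = features * keep_mask[..., None]
    return corrupted, keep_mask

# Example: corrupt a batch of 4 language sequences (10 steps, 8-dim features).
feats = np.ones((4, 10, 8))
noisy, mask = random_feature_missing(feats, missing_rate=0.5, seed=1)
```

A robustness curve is then typically obtained by sweeping `missing_rate` (e.g., 0.0 to 0.9) and reporting the model's metrics at each level.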

Haoyu Zhang, Wenbin Wang, Tianshu Yu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 85.2 | 238 |
| Multimodal Sentiment Analysis | CMU-MOSI v1 (test) | Accuracy (2-Class) | 81.1 | 64 |
| Multimodal Sentiment Analysis | CMU-MOSI 43 (test) | Accuracy (2-Class) | 81.1 | 56 |
