
Progressive Representation Learning for Multimodal Sentiment Analysis with Incomplete Modalities

About

Multimodal Sentiment Analysis (MSA) seeks to infer human emotions by integrating textual, acoustic, and visual cues. However, existing approaches often assume that all modalities are complete, whereas real-world applications frequently encounter noise, hardware failures, or privacy restrictions that result in missing modalities. A significant feature misalignment exists between incomplete and complete modalities, and directly fusing them can even distort the well-learned representations of the intact modalities. To this end, we propose PRLF, a Progressive Representation Learning Framework designed for MSA under uncertain missing-modality conditions. PRLF introduces an Adaptive Modality Reliability Estimator (AMRE), which dynamically quantifies the reliability of each modality using recognition confidence and Fisher information to determine the dominant modality. In addition, the Progressive Interaction (ProgInteract) module iteratively aligns the other modalities with the dominant one, thereby enhancing cross-modal consistency while suppressing noise. Extensive experiments on CMU-MOSI, CMU-MOSEI, and SIMS verify that PRLF outperforms state-of-the-art methods across both inter- and intra-modality missing scenarios, demonstrating its robustness and generalization capability.
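The two-stage idea described above (score each modality's reliability, then progressively pull the weaker modalities toward the dominant one) can be sketched in a few lines. The sketch below is illustrative only: the abstract does not specify how AMRE combines recognition confidence with Fisher information, so the particular combination here (confidence penalized by the categorical Fisher trace, which is large for flat, uncertain predictions) and the simple interpolation used for ProgInteract-style alignment are assumptions, not the paper's method.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reliability(logits):
    """Illustrative AMRE-style score: prediction confidence, discounted by
    the categorical Fisher trace tr F = sum p*(1-p), which peaks for flat
    (uncertain) predictive distributions. This combination is an assumption."""
    p = softmax(logits)
    confidence = p.max()
    fisher_trace = np.sum(p * (1.0 - p))
    return confidence / (1.0 + fisher_trace)

def progressive_align(feats, logits, steps=3, lr=0.5):
    """Pick the most reliable (dominant) modality, then iteratively nudge the
    other modalities' features toward it (a stand-in for ProgInteract)."""
    scores = {m: reliability(l) for m, l in logits.items()}
    dominant = max(scores, key=scores.get)
    anchor = feats[dominant]
    aligned = dict(feats)
    for _ in range(steps):
        for m in aligned:
            if m != dominant:
                aligned[m] = aligned[m] + lr * (anchor - aligned[m])
    return dominant, aligned

# Toy case: confident text, missing (zeroed) audio, noisy vision.
rng = np.random.default_rng(0)
feats = {"text": rng.normal(size=8),
         "audio": np.zeros(8),
         "vision": rng.normal(size=8)}
logits = {"text": np.array([4.0, 0.5, 0.2]),
          "audio": np.array([0.0, 0.0, 0.0]),
          "vision": np.array([1.0, 0.8, 0.9])}
dominant, aligned = progressive_align(feats, logits)
```

In this toy setup the confident text head yields the highest reliability, so text is selected as the dominant modality and the zeroed audio features are progressively interpolated toward the text anchor rather than being fused directly.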

Jindi Bao, Jianjun Qian, Mengkai Yan, Jian Yang• 2026

Related benchmarks

Task                          | Dataset     | Result                    | Rank
------------------------------|-------------|---------------------------|-----
Multimodal Sentiment Analysis | CMU-MOSI    | --                        | 144
Multimodal Sentiment Analysis | SIMS (test) | Accuracy (2-Class): 82.58 | 78
