
Cross-modal Prompting for Balanced Incomplete Multi-modal Emotion Recognition

About

Incomplete multi-modal emotion recognition (IMER) aims to understand human intentions and sentiments by comprehensively exploiting partially observed multi-source data. Although multi-modal data are expected to provide richer information, the performance gap and the modality under-optimization problem hinder effective multi-modal learning in practice, and both are exacerbated when modalities are missing. To address this issue, we devise a novel Cross-modal Prompting (ComP) method, which emphasizes coherent information by enhancing modality-specific features and improves overall recognition accuracy by boosting each modality's performance. Specifically, a progressive prompt generation module with a dynamic gradient modulator produces concise and consistent modality semantic cues. Meanwhile, cross-modal knowledge propagation uses the delivered prompts to selectively amplify the consistent information in modality features, enhancing the discrimination of each modality-specific output. Additionally, a coordinator dynamically re-weights the modality outputs, complementing the balance strategy and improving the model's efficacy. Extensive experiments on 4 datasets against 7 state-of-the-art methods under different missing rates validate the effectiveness of the proposed method.
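The abstract does not give implementation details, but the coordinator's dynamic re-weighting of modality outputs can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names (`softmax`, `coordinate`), the use of a softmax over per-modality confidence scores, and the dict-based interface are all hypothetical, not the paper's actual design. The sketch fuses class logits from whichever modalities are observed, so missing modalities simply drop out of the weighting.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def coordinate(modality_logits, confidences):
    """Fuse per-modality class logits with dynamically computed weights.

    modality_logits: dict mapping modality name -> list of class logits.
    confidences: dict mapping modality name -> scalar confidence score
        (hypothetically, e.g., the negative entropy of that modality's
        own prediction). Missing modalities are simply absent from
        both dicts, so the weights renormalize over what is observed.
    Returns (fused_logits, per_modality_weights).
    """
    names = list(modality_logits)
    weights = softmax([confidences[n] for n in names])
    num_classes = len(next(iter(modality_logits.values())))
    fused = [0.0] * num_classes
    for w, n in zip(weights, names):
        for c in range(num_classes):
            fused[c] += w * modality_logits[n][c]
    return fused, dict(zip(names, weights))

# Example: the text modality is missing; audio and visual share the weight.
fused, w = coordinate(
    {"audio": [1.0, 0.0], "visual": [0.0, 1.0]},
    {"audio": 0.0, "visual": 0.0},
)
# With equal confidences, each observed modality gets weight 0.5.
```

In this toy setting, raising one modality's confidence score shifts the fused prediction toward that modality's logits, which is the qualitative behavior the coordinator description suggests.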

Wen-Jue He, Xiaofeng Zhu, Zheng Zhang • 2025

Related benchmarks

Task                           Dataset                   Result           Rank
Multimodal Sentiment Analysis  CMU-MOSEI (test)          F1 Score: 86.48  206
Multimodal Sentiment Analysis  CMU-MOSI standard (test)  Accuracy: 87.2   62
Emotion Recognition            IEMOCAP 4-class (test)    WAR: 80.66       46
Emotion Recognition            IEMOCAPSix (test)         Accuracy: 62.02  35
