
Enhance-then-Balance Modality Collaboration for Robust Multimodal Sentiment Analysis

About

Multimodal sentiment analysis (MSA) integrates heterogeneous text, audio, and visual signals to infer human emotions. While recent approaches leverage cross-modal complementarity, they often struggle to fully exploit weaker modalities. In practice, dominant modalities tend to overshadow non-verbal ones, inducing modality competition and limiting their overall contribution. This imbalance degrades fusion performance and robustness under noisy or missing modalities. To address this, we propose the Enhance-then-Balance Modality Collaboration (EBMC) framework. EBMC improves representation quality via semantic disentanglement and cross-modal enhancement, strengthening weaker modalities. To prevent dominant modalities from overwhelming others, an Energy-guided Modality Coordination mechanism achieves implicit gradient rebalancing via a differentiable equilibrium objective. Furthermore, Instance-aware Modality Trust Distillation estimates sample-level reliability to adaptively modulate fusion weights, ensuring robustness. Extensive experiments demonstrate that EBMC achieves state-of-the-art or competitive results and maintains strong performance under missing-modality settings.
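The idea of modulating fusion weights by per-sample modality reliability can be illustrated with a minimal sketch. This is a generic reliability-weighted fusion, not the paper's actual Instance-aware Modality Trust Distillation: the function name, the softmax weighting, and the temperature parameter are illustrative assumptions.

```python
import numpy as np

def reliability_weighted_fusion(features, reliability, temperature=1.0):
    """Fuse per-modality embeddings with softmax weights over reliability.

    Generic sketch (assumed formulation, not the paper's IMTD): modalities
    estimated as less reliable for this sample receive smaller fusion weights,
    so a noisy or missing modality contributes less to the fused vector.

    features:    dict, modality name -> (d,) embedding
    reliability: dict, modality name -> scalar reliability estimate
    """
    names = sorted(features)
    scores = np.array([reliability[m] for m in names]) / temperature
    # Numerically stable softmax over modality reliability scores.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = sum(w * features[m] for w, m in zip(weights, names))
    return fused, dict(zip(names, weights))

# Example: text is judged reliable, audio is noisy for this sample,
# so text dominates the fused representation.
feats = {"text": np.ones(4), "audio": np.zeros(4), "visual": np.full(4, 2.0)}
rel = {"text": 2.0, "audio": 0.1, "visual": 1.0}
fused, w = reliability_weighted_fusion(feats, rel)
```

Lowering `temperature` sharpens the weighting toward the most trusted modality; raising it moves fusion back toward a uniform average.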

Kang He, Yuzhe Ding, Xinrong Wang, Fei Li, Chong Teng, Donghong Ji • 2026

Related benchmarks

Task                           Dataset    Result                     Rank
Multimodal Sentiment Analysis  CMU-MOSI   Accuracy (2-Class): 86.26  144
Emotion Recognition            IEMOCAP    --                         115
