
Multi-Loss Learning for Speech Emotion Recognition with Energy-Adaptive Mixup and Frame-Level Attention

About

Speech emotion recognition (SER) is an important technology in human-computer interaction, but achieving high performance is challenging due to the complexity of emotional expression and the scarcity of annotated data. To tackle these challenges, we propose a multi-loss learning (MLL) framework that integrates an energy-adaptive mixup (EAM) method and a frame-level attention module (FLAM). The EAM method leverages SNR-based augmentation to generate diverse speech samples that capture subtle emotional variations. FLAM enhances frame-level feature extraction to capture multi-frame emotional cues. Our MLL strategy combines Kullback-Leibler divergence, focal, center, and supervised contrastive losses to optimize learning, address class imbalance, and improve feature separability. We evaluate our method on four widely used SER datasets: IEMOCAP, MSP-IMPROV, RAVDESS, and SAVEE. The results show that our method achieves state-of-the-art performance, demonstrating its effectiveness and robustness.
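The abstract does not spell out the EAM formulation, but the core idea of SNR-based mixing can be sketched as follows: a second waveform is scaled by a factor derived from the two signals' energies so the mixture hits a target signal-to-noise ratio. The function name `snr_mixup` and the energy computation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def snr_mixup(primary: np.ndarray, secondary: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `secondary` into `primary` at a target SNR in dB.

    The scale factor `a` is chosen from the two signals' energies so that
    10 * log10(E_primary / E_scaled_secondary) == snr_db.
    (Illustrative sketch only; not the paper's exact EAM method.)
    """
    e_primary = np.sum(primary ** 2)
    e_secondary = np.sum(secondary ** 2)
    # Solve e_primary / (a**2 * e_secondary) = 10**(snr_db / 10) for a
    a = np.sqrt(e_primary / (e_secondary * 10.0 ** (snr_db / 10.0)))
    return primary + a * secondary

# Example: mix two synthetic "utterances" at 10 dB SNR
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
utt = np.sin(2 * np.pi * 220.0 * t)
other = 0.3 * np.sin(2 * np.pi * 330.0 * t + 0.5)
mixed = snr_mixup(utt, other, snr_db=10.0)
```

Varying the target SNR over a range would yield the kind of diverse augmented samples the abstract describes; the mixed waveform keeps the primary utterance dominant while injecting controlled variation.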

Cong Wang, Yizhong Geng, Yuhua Wen, Qifei Li, Yingming Gao, Ruimin Wang, Chunfeng Wang, Hao Li, Ya Li, Wei Chen • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech Emotion Recognition | IEMOCAP (speaker-independent 5-fold cross-validation) | Weighted Accuracy (WA) | 78.47 | 19
Speech Emotion Recognition | RAVDESS (6-fold subject-independent cross-validation) | Weighted Accuracy (WA) | 93.4 | 8
Speech Emotion Recognition | MSP-IMPROV (6-fold session-independent cross-validation) | Weighted Accuracy (WA) | 58.55 | 7
