ERM-MinMaxGAP: Benchmarking and Mitigating Gender Bias in Multilingual Multimodal Speech-LLM Emotion Recognition

About

Speech emotion recognition (SER) systems can exhibit gender-related performance disparities, but how such bias manifests in multilingual speech LLMs across languages and modalities remains unclear. We introduce a multilingual, multimodal benchmark built on MELD-ST, spanning English, Japanese, and German, to quantify language-specific SER performance and gender gaps. We find that bias is strongly language-dependent and that multimodal fusion does not reliably improve fairness. To address these issues, we propose ERM-MinMaxGAP, a fairness-informed training objective that augments empirical risk minimization (ERM) with an adaptive fairness-weighting mechanism and a novel MinMaxGAP regularizer penalizing the maximum male-female loss gap within each language and modality. Built on the Qwen2-Audio backbone, ERM-MinMaxGAP improves multilingual SER performance by 5.5% and 5.0% while reducing the overall gender bias gap by 0.1% and 1.4% in the unimodal and multimodal settings, respectively.
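The abstract's objective can be sketched as plain code. This is a minimal illustration, not the paper's implementation: the exact loss decomposition, the adaptive-weight update rule, and all function and variable names here are assumptions; only the high-level idea (ERM plus a penalty on the largest male-female loss gap within each language-modality group) comes from the abstract.

```python
def erm_minmaxgap_loss(group_losses, fairness_weight):
    """Sketch of an ERM + MinMaxGAP-style objective (assumed formulation).

    group_losses: dict mapping (language, modality, gender) -> mean loss
                  for that subgroup, with gender in {"male", "female"}.
    fairness_weight: scalar weight on the fairness regularizer.
    Returns (total_loss, erm_term, max_gap).
    """
    # ERM term: average loss over all subgroups (one possible choice).
    erm = sum(group_losses.values()) / len(group_losses)

    # MinMaxGAP term: largest absolute male-female loss gap across
    # (language, modality) pairs.
    max_gap = 0.0
    pairs = {(lang, mod) for (lang, mod, _) in group_losses}
    for lang, mod in pairs:
        gap = abs(group_losses[(lang, mod, "male")]
                  - group_losses[(lang, mod, "female")])
        max_gap = max(max_gap, gap)

    return erm + fairness_weight * max_gap, erm, max_gap


def update_fairness_weight(weight, max_gap, step_size=0.1):
    """Toy adaptive-weight update (purely illustrative): grow the
    fairness weight when the observed gap is large."""
    return weight + step_size * max_gap
```

For example, with English-audio losses of 1.0 (male) and 0.5 (female) and Japanese-audio losses of 0.8 and 0.9, the max gap is 0.5 and, with a fairness weight of 0.2, the total loss is the 0.8 ERM average plus a 0.1 penalty.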

Zi Haur Pang, Xiaoxue Gao, Tatsuya Kawahara, Nancy F. Chen • 2026

Related benchmarks

Task                       | Dataset                     | Metric                | Result | Rank
Speech Emotion Recognition | MELD-ST Multilingual (test) | W-F1 (SER)            | 68.13  | 24
Speech Emotion Recognition | MELD-ST (DEU) (test)        | W-F1 Score            | 53.32  | 12
Speech Emotion Recognition | MELD-ST (JPN) (test)        | SER Weighted F1 Score | 51.58  | 12
