
Beyond Silent Letters: Amplifying LLMs in Emotion Recognition with Vocal Nuances

About

Emotion recognition in speech is a challenging multimodal task that requires understanding both verbal content and vocal nuances. This paper introduces a novel approach to emotion detection using Large Language Models (LLMs), which have demonstrated exceptional capabilities in natural language understanding. To overcome the inherent limitation of LLMs in processing audio inputs, we propose SpeechCueLLM, a method that translates speech characteristics into natural language descriptions, allowing LLMs to perform multimodal emotion analysis via text prompts without any architectural changes. Our method is minimal yet impactful, outperforming baseline models that require structural modifications. We evaluate SpeechCueLLM on two datasets: IEMOCAP and MELD, showing significant improvements in emotion recognition accuracy, particularly for high-quality audio data. We also explore the effectiveness of various feature representations and fine-tuning strategies for different LLMs. Our experiments demonstrate that incorporating speech descriptions yields a more than 2% increase in the average weighted F1 score on IEMOCAP (from 70.111% to 72.596%).
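The core idea, translating speech characteristics into a natural-language description that is prepended to the transcript in a text prompt, can be sketched as follows. This is a minimal illustration, not the paper's actual recipe: the feature names, thresholds, and prompt wording here are all hypothetical.

```python
# Hypothetical sketch of the SpeechCueLLM idea: numeric speech features
# are verbalized into coarse natural-language labels, and the resulting
# description is combined with the transcript in an LLM prompt.
# All thresholds below are illustrative assumptions, not the paper's values.

def describe_speech(pitch_hz: float, energy_db: float, rate_wps: float) -> str:
    """Map raw feature values to a one-sentence speech-cue description."""
    pitch = "high" if pitch_hz > 200 else "low" if pitch_hz < 120 else "moderate"
    energy = "loud" if energy_db > -10 else "quiet" if energy_db < -25 else "normal"
    rate = "fast" if rate_wps > 3.5 else "slow" if rate_wps < 1.5 else "average"
    return (f"The speaker's pitch is {pitch}, volume is {energy}, "
            f"and speaking rate is {rate}.")

def build_prompt(transcript: str, description: str) -> str:
    """Assemble a text-only prompt carrying both verbal and vocal cues."""
    return (f"Speech cues: {description}\n"
            f'Utterance: "{transcript}"\n'
            f"What emotion is the speaker expressing?")
```

Because the vocal information arrives as plain text, any off-the-shelf LLM can consume it through its normal prompt interface, which is why no architectural changes are needed.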

Zehui Wu, Ziwei Gong, Lin Ai, Pengyuan Shi, Kaan Donbekci, Julia Hirschberg • 2024

Related benchmarks

Task                                                         | Dataset                     | Metric            | Result | Rank
Emotion Recognition                                          | IEMOCAP                     | Accuracy          | 60.07  | 71
Emotion Classification                                       | IEMOCAP (test)              | Weighted-F1       | 72.18  | 36
Emotion Recognition                                          | MELD (test)                 | --                | --     | 26
Speech Emotion Recognition                                   | IEMOCAP → MELD Cross-Domain | Weighted F1       | 55.16  | 14
Speech Emotion Recognition                                   | MELD → IEMOCAP Cross-Domain | Weighted F1       | 44.79  | 14
Emotion Recognition                                          | MELD                        | UACC              | 56.74  | 12
Speech Emotion Recognition                                   | ASVP-ESD Mixlingual         | Weighted F1       | 0.6812 | 8
Human evaluation of prosodic grounding and reasoning quality | IEMOCAP and MELD (test)     | Evaluator 1 Score | 3.12   | 2
