
Leveraged Mel spectrograms using Harmonic and Percussive Components in Speech Emotion Recognition

About

Speech Emotion Recognition (SER) is an affective technology that enables intelligent embedded devices to interact with sensitivity. Similarly, call centre employees recognise customers' emotions from their pitch, energy, and tone of voice so as to modify their speech for a high-quality interaction with customers. This work explores, for the first time, the effects of the harmonic and percussive components of Mel spectrograms in SER. We attempt to leverage the Mel spectrogram by decomposing it into distinguishable acoustic features for exploitation in our proposed architecture, which includes a novel feature map generator algorithm, a CNN-based feature extractor, and a multi-layer perceptron (MLP) classifier. This study specifically focuses on effective data augmentation techniques for building an enriched hybrid feature map. This process produces a 2D image that can be used as input to a pre-trained CNN-VGG16 feature extractor. Furthermore, we also investigate other acoustic features such as MFCCs, chromagram, spectral contrast, and tonnetz to assess our proposed framework. A test accuracy of 92.79% on the Berlin EMO-DB database is achieved. Our result is higher than previous works using CNN-VGG16.
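The paper's feature map generator is not reproduced here, but the core idea of separating a spectrogram into harmonic and percussive components is commonly done with median-filtering HPSS (as in `librosa.effects.hpss`). Below is a minimal numpy-only sketch of that separation on a magnitude spectrogram, using a synthetic test signal; window sizes, hop length, and the soft-mask formulation are illustrative choices, not the authors' settings.

```python
import numpy as np

def stft_mag(y, n_fft=512, hop=128):
    # Magnitude spectrogram via a Hann-windowed numpy STFT -> (freq, time)
    win = np.hanning(n_fft)
    frames = [y[i:i + n_fft] * win for i in range(0, len(y) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def median_filter_1d(x, k, axis):
    # Sliding median of odd size k along one axis, edge-padded
    pad = [(0, 0), (0, 0)]
    pad[axis] = (k // 2, k // 2)
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[axis]):
        sl = [slice(None), slice(None)]
        sl[axis] = slice(i, i + k)
        idx = [slice(None), slice(None)]
        idx[axis] = i
        out[tuple(idx)] = np.median(xp[tuple(sl)], axis=axis)
    return out

def hpss(S, k=17):
    # Harmonic content is smooth over time (median along time axis);
    # percussive content is smooth over frequency (median along freq axis)
    H = median_filter_1d(S, k, axis=1)
    P = median_filter_1d(S, k, axis=0)
    mask_h = H / (H + P + 1e-10)  # soft mask; masks sum to ~1
    return S * mask_h, S * (1.0 - mask_h)

# Synthetic signal: a steady 440 Hz tone (harmonic) plus clicks (percussive)
sr = 8000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
clicks = np.zeros_like(t)
clicks[::2000] = 1.0
S = stft_mag(tone + clicks)
S_harm, S_perc = hpss(S)
```

Either component (or both stacked) can then be passed through a Mel filterbank and log-scaled to give the 2D images the abstract describes as VGG16 input.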

David Hason Rudd, Huan Huo, Guandong Xu · 2023

Related benchmarks

Task: Speech Emotion Recognition
Dataset: EMO-DB and RAVDESS
Result: Accuracy 92.79%
Rank: 16
