
Mingling or Misalignment? Temporal Shift for Speech Emotion Recognition with Pre-trained Representations

About

Fueled by recent advances in self-supervised models, pre-trained speech representations have proved effective for the downstream speech emotion recognition (SER) task. Most prior works focus on exploiting the pre-trained representations and simply adopt a linear head on top of the pre-trained model, neglecting the design of the downstream network. In this paper, we propose a temporal shift module to mingle channel-wise information without introducing any parameters or FLOPs. With the temporal shift module, three designed baseline building blocks evolve into their corresponding shift variants, i.e., ShiftCNN, ShiftLSTM, and Shiftformer. Moreover, to balance the trade-off between mingling and misalignment, we propose two technical strategies: placement of shift and proportion of shift. The family of temporal shift models all outperform the state-of-the-art methods on the benchmark IEMOCAP dataset under both finetuning and feature extraction settings. Our code is available at https://github.com/ECNU-Cross-Innovation-Lab/ShiftSER.
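The core idea of a temameter-free temporal shift can be sketched in a few lines: a proportion of feature channels is displaced by one time step toward the past, an equal proportion toward the future, and the rest are left in place, so adjacent frames exchange information at zero parameter and FLOP cost. The sketch below is an illustrative, pure-Python reading of that idea under assumed conventions (the function name `temporal_shift`, the `proportion` argument, and zero-padding at the sequence boundaries are our assumptions); the paper's actual placement and proportion strategies are more refined.

```python
def temporal_shift(frames, proportion=0.5):
    """Parameter-free temporal shift over a (T, C) feature sequence.

    frames: list of T per-timestep channel lists, each of length C.
    proportion: total fraction of channels to shift; half of them move
    one step toward the past, half one step toward the future
    (assumed split; boundaries are zero-padded).
    """
    t = len(frames)
    c = len(frames[0])
    fold = int(c * proportion) // 2  # channels per shift direction
    out = [[0.0] * c for _ in range(t)]
    for i in range(t):
        for j in range(c):
            if j < fold:
                # pull the next frame's value back (shift toward the past)
                out[i][j] = frames[i + 1][j] if i + 1 < t else 0.0
            elif j < 2 * fold:
                # push the previous frame's value forward (shift toward the future)
                out[i][j] = frames[i - 1][j] if i > 0 else 0.0
            else:
                # remaining channels stay temporally aligned
                out[i][j] = frames[i][j]
    return out
```

With `proportion=0.5` on a 4-channel sequence, one channel shifts each way and the other two are untouched, so each frame mixes in features from its immediate neighbors before the block's own (parameterized) layers run.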

Siyuan Shen, Feng Liu, Aimin Zhou • 2023

Related benchmarks

Task: Speech Emotion Recognition
Dataset: IEMOCAP (test)
Result: 72.7 (Accuracy)
Rank: 20
