Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition
About
This paper presents a lightweight yet powerful solution for the task of Emotion Recognition and Sentiment Analysis. Our motivation is to propose two architectures based on Transformers and modulation that combine linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state-of-the-art in the field. To demonstrate the efficiency of our models, we carefully evaluate their performance on the IEMOCAP, MOSI, MOSEI and MELD datasets. The experiments can be directly replicated and the code is fully open for future research.
Jean-Benoit Delbrouck, Noé Tits, Stéphane Dupont • 2020
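The abstract describes fusing linguistic and acoustic inputs with a Transformer through modulation. Below is a minimal, illustrative sketch of one way such modulated fusion can be wired up; the module names, feature dimensions, and the FiLM-style scale/shift modulation are assumptions for illustration, not the paper's exact architectures (those are defined in the released code).

```python
# Illustrative sketch of Transformer-based modulated fusion for
# linguistic-acoustic emotion recognition. Dimensions and the FiLM-style
# modulation are assumed for the example, not taken from the paper.
import torch
import torch.nn as nn


class ModulatedFusion(nn.Module):
    def __init__(self, ling_dim=300, acou_dim=74, d_model=128, n_heads=4,
                 n_layers=2, n_classes=4):
        super().__init__()
        self.ling_proj = nn.Linear(ling_dim, d_model)
        self.acou_proj = nn.Linear(acou_dim, d_model)
        # A linguistic summary vector predicts per-feature scale and shift
        # that modulate the acoustic representation before the Transformer.
        self.film = nn.Linear(d_model, 2 * d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, ling, acou):
        # ling: (batch, T_l, ling_dim) word embeddings
        # acou: (batch, T_a, acou_dim) acoustic frame features
        ling_h = self.ling_proj(ling).mean(dim=1)          # (batch, d_model)
        gamma, beta = self.film(ling_h).chunk(2, dim=-1)   # modulation params
        acou_h = self.acou_proj(acou)
        modulated = gamma.unsqueeze(1) * acou_h + beta.unsqueeze(1)
        fused = self.encoder(modulated).mean(dim=1)        # pooled fusion
        return self.classifier(fused)                      # emotion logits


if __name__ == "__main__":
    model = ModulatedFusion()
    logits = model(torch.randn(2, 20, 300), torch.randn(2, 50, 74))
    print(logits.shape)  # torch.Size([2, 4])
```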
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Emotion Detection | MELD (test) | -- | 32 |
| Emotion Recognition | CMU-MOSEI (test) | -- | 19 |
| Binary Sentiment Classification | MOSEI (test) | Accuracy: 82 | 16 |
| 4-emotion classification | IEMOCAP | F1 Score: 74 | 9 |
| Sentiment Classification | MOSI (test) | Weighted F1 Score: 80 | 8 |