
Self-attention fusion for audiovisual emotion recognition with incomplete data

About

In this paper, we consider the problem of multimodal data analysis with a use case of audiovisual emotion recognition. We propose an architecture capable of learning from raw data and describe three variants of it with distinct modality fusion mechanisms. While most previous works assume the ideal scenario in which both modalities are present at all times during inference, we evaluate the robustness of the model in unconstrained settings where one modality is absent or noisy, and propose a method to mitigate these limitations in the form of modality dropout. Most importantly, we find that this approach not only improves performance drastically when one modality is absent or its representation is noisy, but also improves performance in the standard ideal setting, outperforming the competing methods.
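The modality dropout idea described above can be sketched in a few lines: during training, one modality's features are occasionally zeroed out so the fusion model learns not to rely on both inputs always being present. The function below is a minimal illustrative sketch, not the paper's exact implementation; the drop probability, the NumPy setup, and the "drop at most one modality per call" policy are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_dropout(audio, video, p_drop=0.2, training=True):
    """Randomly zero out at most one modality during training.

    Hypothetical sketch: with probability `p_drop` the audio features
    are replaced by zeros (simulating a missing/corrupted audio stream),
    with probability `p_drop` the video features are, and otherwise both
    modalities pass through unchanged. At inference (training=False) the
    inputs are never modified.
    """
    if not training:
        return audio, video
    r = rng.random()
    if r < p_drop:
        audio = np.zeros_like(audio)      # simulate missing audio
    elif r < 2 * p_drop:
        video = np.zeros_like(video)      # simulate missing video
    return audio, video

# Usage: apply to each training batch before the fusion layers.
audio_feats = np.ones((4, 8))    # e.g. 4 samples, 8-dim audio features
video_feats = np.ones((4, 16))   # e.g. 4 samples, 16-dim video features
audio_feats, video_feats = modality_dropout(audio_feats, video_feats)
```

Because the model thereby sees zeroed-out modalities during training, it is also exposed to the "one modality absent" condition it will face at test time, which is the intuition behind the robustness gains reported above.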

Kateryna Chumachenko, Alexandros Iosifidis, Moncef Gabbouj · 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Emotion Recognition | RAVDESS 7-class | WAR | 79.2 | 19 |
| Emotion Classification | CREMA-D | F1 (Macro) | 77.1 | 18 |
| Emotion Recognition | RAVDESS (test) | Accuracy | 0.583 | 17 |
| Audiovisual Emotion Recognition | RAVDESS | Accuracy (AV) | 81.58 | 11 |
| Speech Emotion Recognition | RAVDESS (6-fold subject-independent cross-validation) | Weighted Accuracy (WA) | 79.2 | 8 |
| Emotional Attribute Prediction | MSP-IMPROV Audio-Visual | Arousal | 0.786 | 6 |
| Emotional Attribute Prediction | MSP-IMPROV Acoustic | Arousal | 0.745 | 6 |
| Emotional Attribute Prediction | MSP-IMPROV Visual | Arousal | 0.345 | 6 |
| Audiovisual Sentiment Analysis | MOSEI | Accuracy (AV) | 67.19 | 3 |
