
MMGCN: Multimodal Fusion via Deep Graph Convolution Network for Emotion Recognition in Conversation

About

Emotion recognition in conversation (ERC) is a crucial component in affective dialogue systems, which helps the system understand users' emotions and generate empathetic responses. However, most works focus on modeling speaker and contextual information primarily on the textual modality or simply leveraging multimodal information through feature concatenation. In order to explore a more effective way of utilizing both multimodal and long-distance contextual information, we propose a new model based on multimodal fused graph convolutional network, MMGCN, in this work. MMGCN can not only make use of multimodal dependencies effectively, but also leverage speaker information to model inter-speaker and intra-speaker dependency. We evaluate our proposed model on two public benchmark datasets, IEMOCAP and MELD, and the results prove the effectiveness of MMGCN, which outperforms other SOTA methods by a significant margin under the multimodal conversation setting.
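The core idea above can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): each utterance contributes one node per modality, edges connect modality nodes of the same utterance and same-modality nodes across the conversation, speaker embeddings are added to node features, and a graph-convolution step propagates information over the fused graph. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

def build_multimodal_graph(num_utts, num_mods=3):
    """Adjacency over num_utts * num_mods nodes (node id = utt * num_mods + mod)."""
    n = num_utts * num_mods
    A = np.eye(n)  # self-loops
    for u in range(num_utts):
        for m1 in range(num_mods):
            i = u * num_mods + m1
            # cross-modal edges within the same utterance
            for m2 in range(num_mods):
                A[i, u * num_mods + m2] = 1.0
            # intra-modal edges across all utterances (long-distance context)
            for v in range(num_utts):
                A[i, v * num_mods + m1] = 1.0
    return A

def gcn_layer(X, A, W):
    """One graph-convolution step: symmetric normalization, propagation, ReLU."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    return np.maximum(A_hat @ X @ W, 0.0)

rng = np.random.default_rng(0)
num_utts, num_mods, dim = 4, 3, 8
X = rng.standard_normal((num_utts * num_mods, dim))     # per-modality utterance features
speakers = np.array([0, 1, 0, 1])                       # speaker id of each utterance
spk_emb = rng.standard_normal((2, dim))                 # speaker embedding table
X = X + np.repeat(spk_emb[speakers], num_mods, axis=0)  # inject speaker identity
W = rng.standard_normal((dim, dim))
H = gcn_layer(X, build_multimodal_graph(num_utts, num_mods), W)
print(H.shape)  # (12, 8)
```

A classifier over the resulting node representations (e.g. pooled per utterance) would then predict the emotion label; the actual MMGCN stacks deeper graph-convolution layers than this single-step sketch.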

Jingwen Hu, Yuchen Liu, Jinming Zhao, Qin Jin • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Emotion Recognition in Conversation | IEMOCAP (test) | Weighted Average F1 Score | 66.26 | 154 |
| Conversational Emotion Recognition | IEMOCAP | Weighted Average F1 Score | 66.2 | 129 |
| Emotion Recognition in Conversation | MELD (test) | Weighted F1 | 58.65 | 118 |
| Emotion Recognition | IEMOCAP | Accuracy | 66.36 | 71 |
| Multimodal Emotion Recognition in Conversation | MELD standard (test) | WF1 | 65.21 | 38 |
| Emotion Classification | IEMOCAP (test) | -- | -- | 36 |
| Emotion Recognition | M3ED (val) | Weighted F1 | 56.67 | 35 |
| Emotion Recognition | M3ED (test) | Weighted F1 | 51.18 | 35 |
| Multimodal Emotion Recognition in Conversation | IEMOCAP 6-class (test) | Weighted F1 Score (WF1) | 66.81 | 33 |
| Emotion Detection | MELD (test) | Weighted-F1 | 0.5865 | 32 |

Showing 10 of 27 rows.
