
MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations

About

Emotion Recognition in Conversations (ERC) has considerable prospects for developing empathetic machines. For multimodal ERC, it is vital to understand context and fuse modality information in conversations. Recent graph-based fusion methods generally aggregate multimodal information by exploring unimodal and cross-modal interactions in a graph. However, they accumulate redundant information at each layer, limiting the context understanding between modalities. In this paper, we propose a novel Multimodal Dynamic Fusion Network (MM-DFN) to recognize emotions by fully understanding multimodal conversational context. Specifically, we design a new graph-based dynamic fusion module to fuse multimodal contextual features in a conversation. The module reduces redundancy and enhances complementarity between modalities by capturing the dynamics of contextual information in different semantic spaces. Extensive experiments on two public benchmark datasets demonstrate the effectiveness and superiority of MM-DFN.
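The abstract describes fusing multimodal contextual features by aggregating over a graph of unimodal and cross-modal interactions. The following is only a generic illustration of that idea, not the authors' MM-DFN implementation: the node layout (one node per utterance-modality pair), the fully connected adjacency, and the ReLU activation are all simplifying assumptions.

```python
import numpy as np

def graph_fusion_step(H, A):
    """One generic graph-aggregation step over multimodal nodes.

    H: (n, d) node features, one node per utterance-modality pair
    A: (n, n) adjacency with self-loops, encoding unimodal and
       cross-modal edges (assumed structure, for illustration only)
    """
    deg = A.sum(axis=1, keepdims=True)   # node degrees
    A_norm = A / deg                     # row-normalized adjacency
    return np.maximum(A_norm @ H, 0.0)   # ReLU(A_norm @ H)

# Toy conversation: 2 utterances x 3 modalities (text, audio, video)
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
A = np.ones((6, 6))                      # fully connected, a simplification
H1 = graph_fusion_step(H, A)
```

Stacking such steps naively accumulates redundant information across layers; the paper's dynamic fusion module is motivated by reducing exactly that redundancy while keeping cross-modal complementarity.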

Dou Hu, Xiaolong Hou, Lingwei Wei, Lianxin Jiang, Yang Mo · 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Emotion Recognition in Conversation | IEMOCAP (test) | Weighted Average F1 Score | 68.18 | 168
Emotion Recognition in Conversation | MELD (test) | Weighted F1 | 59.46 | 143
Emotion Recognition | IEMOCAP | Accuracy | 68.21 | 115
Multimodal Emotion Recognition | IEMOCAP 6-way | F1 (Avg) | 67.4 | 106
Multimodal Emotion Recognition in Conversation | MELD standard (test) | WF1 | 65.48 | 53
Multimodal Emotion Recognition in Conversation | IEMOCAP 6-class (test) | Weighted F1 Score (WF1) | 68.83 | 44
Multimodal Emotion Recognition in Conversation | MELD | Weighted Avg F1 Score | 59.46 | 36
Emotion Classification | IEMOCAP (test) | -- | -- | 36
Emotion Detection | MELD (test) | Weighted-F1 | 0.5946 | 32
Multimodal Sentiment Analysis | CMU-MOSEI (0.3, 0.5, 0.7) (test) | Accuracy | 84.18 | 24

(10 of 25 rows shown.)
