
EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition

About

Emotion recognition in conversation (ERC) aims to analyze a speaker's state and identify their emotion during a conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. To extract multi-modal information and the emotional tendency of an utterance effectively, we propose a new structure named Emoformer, which extracts multi-modal emotion vectors from different modalities and fuses them with the sentence vector into an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains the emotion classification results from a context analysis model. In experiments on two benchmark datasets, our model shows better performance than the existing state-of-the-art models.
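The fusion step the abstract describes (per-modality emotion vectors concatenated with a sentence vector into an "emotion capsule", then classified by a context model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the `extract_emotion_vector` projection standing in for an Emoformer branch, and the linear classification head are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dimensions (assumed, not from the paper)
D_MOD, D_EMO, D_SENT, N_CLASSES = 16, 8, 16, 6

def extract_emotion_vector(features, W):
    """Toy stand-in for one Emoformer branch: project one
    modality's features down to an 'emotion vector'."""
    return np.tanh(features @ W)

# one projection per modality (text, audio, visual)
W_text, W_audio, W_visual = (rng.standard_normal((D_MOD, D_EMO)) for _ in range(3))

# toy utterance features for each modality, plus a sentence vector
text_feat, audio_feat, visual_feat = (rng.standard_normal(D_MOD) for _ in range(3))
sentence_vec = rng.standard_normal(D_SENT)

# fuse the per-modality emotion vectors with the sentence vector
# into a single emotion capsule (concatenation, for illustration)
capsule = np.concatenate([
    extract_emotion_vector(text_feat, W_text),
    extract_emotion_vector(audio_feat, W_audio),
    extract_emotion_vector(visual_feat, W_visual),
    sentence_vec,
])

# toy context-analysis head: linear layer + softmax over emotion classes
W_cls = rng.standard_normal((capsule.size, N_CLASSES))
logits = capsule @ W_cls
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_class = int(probs.argmax())
```

In the paper, the capsules of consecutive utterances feed a context analysis model over the whole conversation; here a single utterance is classified in isolation to keep the sketch self-contained.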

Zaijing Li, Fengxiao Tang, Ming Zhao, Yusen Zhu• 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Emotion Recognition in Conversation | IEMOCAP (test) | -- | 154 |
| Emotion Recognition in Conversation | MELD | Weighted Avg F1: 64 | 137 |
| Conversational Emotion Recognition | IEMOCAP | Weighted Average F1 Score: 69.49 | 129 |
| Emotion Recognition | IEMOCAP | -- | 71 |
| Emotion Classification | IEMOCAP (test) | -- | 36 |
| Emotion Detection | MELD (test) | Weighted-F1: 0.64 | 32 |
| Multimodal Emotion Recognition | IEMOCAP 6-way | F1 (Avg): 69.3 | 28 |
| Emotion Recognition in Conversation | MELD | Average Accuracy: 64 | 8 |
