
MELD

Benchmarks

| Task Name | Dataset Name | Metric | SOTA Result | Trend |
|---|---|---|---|---|
| Emotion Recognition in Conversation | MELD | Weighted Avg F1 | 69.15 | 137 |
| Emotion Recognition in Conversation | MELD (test) | Weighted F1 | 69.83 | 118 |
| Multimodal Emotion Recognition in Conversation | MELD standard (test) | WF1 | 66.71 | 38 |
| Emotion Detection | MELD (test) | Weighted-F1 | 0.699 | 32 |
| Emotion Recognition | MELD (test) | W-Avg F1 (7-cls) | 66.52 | 26 |
| Emotion Recognition in Conversation | MELD Standard (test) | Weighted F1 | 69.15 | 19 |
| Speech Emotion Recognition | MELD | Accuracy | 63.5 | 19 |
| Emotion Recognition in Conversation | MELD 1.0 (test) | Weighted F1 | 65.61 | 17 |
| Sentiment Classification | MELD (test) | Accuracy | 68.5 | 15 |
| Speech Emotion Recognition | MELD In-Domain v1 (test) | Accuracy | 54.06 | 14 |
| Emotion Recognition | MELD | UACC | 64.34 | 12 |
| Multimodal Emotion Recognition in Conversation | MELD | Neutral Accuracy | 79.8 | 12 |
| Conversational Emotion Recognition | MELD (test) | Macro F1 Score | 61.9 | 12 |
| Emotion Recognition | MELD (held-out) | F1 Score | 71.1 | 8 |
| Activation Task | Meld-S | AUAC | 63.5 | 8 |
| Emotion Recognition in Conversation | MELD | Average Accuracy | 64 | 8 |
| Sentiment Analysis (SEN) | MELD S | F1 (Binary Weighted) | 78.5 | 7 |
| Emotion Recognition (EMO) | MELD E | Mean Weighted Accuracy | 71.1 | 7 |
| Emotion Recognition in Conversation | MELD | F1 (Neutral) | 76.92 | 7 |
| Multi-modal Sentiment Analysis Classification (MSAC) | MELD | Neutral Accuracy | 0.8005 | 7 |
| Multimodal semantics discovery | MELD-DA (test) | NMI | 23.22 | 6 |
| Dialogue Act Classification | MELD-DA (test) | Acc | 61.75 | 4 |
| Empathetic response generation | MELD | ROUGE-L | 9.9 | 3 |
| Empathetic Dialogue Generation | MELD (test) | Accuracy | 21.89 | 3 |
| Text Classification | MELD (test) | Macro F1 | 49.51 | 2 |