
Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors

About

Humans convey their intentions through both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on different nonverbal contexts, such as vocal patterns and facial expressions. As a result, when modeling human language, it is essential to consider not only the literal meaning of the words but also the nonverbal contexts in which these words appear. To better model human language, we first model expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, we seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN), which models the fine-grained structure of nonverbal subword sequences and dynamically shifts word representations based on nonverbal cues. Our proposed model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns regarding multimodal variations of word representations.
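The sketch below illustrates the core idea described in the abstract: encode the visual and acoustic frames that co-occur with each spoken word, gate them into a nonverbal "shift" vector, and add a scaled version of that shift to the word embedding. This is a minimal illustration, not the authors' released implementation; the class name, all dimension sizes, and the unit cap on the scaling factor are assumptions made for the example.

```python
import torch
import torch.nn as nn

class MultimodalShiftSketch(nn.Module):
    """Illustrative RAVEN-style word shifting (all names/sizes are assumptions):
    per-word LSTMs summarize nonverbal subword sequences, attention gates mix
    the modalities, and the resulting shift is added to the word embedding."""

    def __init__(self, d_word=300, d_visual=47, d_acoustic=74, d_hidden=64):
        super().__init__()
        # Nonverbal sub-networks: one LSTM per modality over the frames
        # aligned to a single word segment.
        self.visual_lstm = nn.LSTM(d_visual, d_hidden, batch_first=True)
        self.acoustic_lstm = nn.LSTM(d_acoustic, d_hidden, batch_first=True)
        # Attention gates conditioned on the word embedding and each
        # modality summary decide how strongly that modality contributes.
        self.visual_gate = nn.Linear(d_word + d_hidden, 1)
        self.acoustic_gate = nn.Linear(d_word + d_hidden, 1)
        # Project the gated nonverbal summary into word-embedding space.
        self.shift_proj = nn.Linear(2 * d_hidden, d_word)

    def forward(self, word_emb, visual_frames, acoustic_frames):
        # word_emb:        (batch, d_word)          one word per example
        # visual_frames:   (batch, T_v, d_visual)   frames inside the word span
        # acoustic_frames: (batch, T_a, d_acoustic)
        _, (h_v, _) = self.visual_lstm(visual_frames)   # (1, batch, d_hidden)
        _, (h_a, _) = self.acoustic_lstm(acoustic_frames)
        h_v, h_a = h_v.squeeze(0), h_a.squeeze(0)       # (batch, d_hidden)

        w_v = torch.sigmoid(self.visual_gate(torch.cat([word_emb, h_v], -1)))
        w_a = torch.sigmoid(self.acoustic_gate(torch.cat([word_emb, h_a], -1)))
        shift = self.shift_proj(torch.cat([w_v * h_v, w_a * h_a], -1))

        # Scale the shift so it stays comparable to the word embedding's norm
        # (capping the ratio at 1 here is a simplifying assumption).
        alpha = torch.minimum(
            word_emb.norm(dim=-1, keepdim=True)
            / shift.norm(dim=-1, keepdim=True).clamp_min(1e-6),
            torch.ones_like(shift[..., :1]),
        )
        return word_emb + alpha * shift  # dynamically shifted representation
```

The scaling step keeps the nonverbal shift from overwhelming the original lexical meaning, which matches the abstract's framing of nonverbal behavior as modulating, rather than replacing, the word representation.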

Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Sentiment Analysis | CMU-MOSI | MAE | 0.915 | 59 |
| Sentiment Analysis | CMU-MOSEI (test) | Acc (2-class) | 79.1 | 40 |
| Multimodal Emotion Recognition | CMU-MOSI | Acc (7-class) | 33.2 | 31 |
| Multimodal Emotion Recognition | CMU-MOSEI (test) | Acc | 70.5 | 30 |
| Sentiment Analysis | CMU-MOSI | Accuracy (2-class) | 78 | 21 |
| Multimodal Emotion Recognition | CMU-MOSI (test) | Acc (7-class) | 33.2 | 21 |
| Multimodal Sentiment Analysis | CMU-MOSI Word Aligned (test) | Accuracy (7-class) | 33.2 | 21 |
| Emotion Recognition | CMU-MOSEI (test) | Accuracy (7-class) | 50 | 19 |
| Emotion Recognition | CMU-MOSEI | F1 score | 79.5 | 19 |
| Multimodal Sentiment Analysis | CMU-MOSEI Unaligned (test) | Accuracy (2-class) | 79.3 | 18 |
(Showing 10 of 17 benchmark rows.)
