
Integrating Multimodal Information in Large Pretrained Transformers

About

Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP. Fine-tuning the trained contextual models on task-specific datasets has been the key to achieving superior performance downstream. While fine-tuning these pre-trained models is straightforward for lexical applications (applications with only the language modality), it is not trivial for multimodal language (a growing area in NLP focused on modeling face-to-face communication): pre-trained models lack the necessary components to accept the two additional modalities of vision and acoustics. In this paper, we propose an attachment to BERT and XLNet called the Multimodal Adaptation Gate (MAG). MAG allows BERT and XLNet to accept multimodal nonverbal data during fine-tuning. It does so by generating a shift to the internal representations of BERT and XLNet, a shift conditioned on the visual and acoustic modalities. In our experiments, we study the commonly used CMU-MOSI and CMU-MOSEI datasets for multimodal sentiment analysis. Fine-tuning MAG-BERT and MAG-XLNet significantly boosts sentiment analysis performance over previous baselines as well as over language-only fine-tuning of BERT and XLNet. On the CMU-MOSI dataset, MAG-XLNet achieves human-level multimodal sentiment analysis performance for the first time in the NLP community.

Wasifur Rahman, Md. Kamrul Hasan, Sangwu Lee, Amir Zadeh, Chengfeng Mao, Louis-Philippe Morency, Ehsan Hoque • 2019
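The abstract describes MAG as a gate that produces a displacement of the language representation, conditioned on the visual and acoustic inputs, before the shifted representation flows onward through the pre-trained model. Below is a minimal PyTorch sketch of such a gate, assuming word-aligned visual and acoustic feature sequences; the layer layout, the `beta_shift` scaling hyperparameter, and all dimensions are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MAG(nn.Module):
    """Illustrative Multimodal Adaptation Gate: shifts a language
    representation by a displacement conditioned on vision/acoustics."""

    def __init__(self, text_dim, visual_dim, acoustic_dim,
                 beta_shift=1.0, dropout=0.1):
        super().__init__()
        # Gates computed from [language; nonverbal] concatenations
        self.gate_v = nn.Linear(text_dim + visual_dim, text_dim)
        self.gate_a = nn.Linear(text_dim + acoustic_dim, text_dim)
        # Projections of the nonverbal features into the language space
        self.proj_v = nn.Linear(visual_dim, text_dim)
        self.proj_a = nn.Linear(acoustic_dim, text_dim)
        self.beta_shift = beta_shift  # assumed scaling hyperparameter
        self.norm = nn.LayerNorm(text_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, z, visual, acoustic):
        # z: (batch, seq_len, text_dim); visual/acoustic are assumed to be
        # word-aligned feature sequences of the same length as z.
        g_v = torch.relu(self.gate_v(torch.cat([z, visual], dim=-1)))
        g_a = torch.relu(self.gate_a(torch.cat([z, acoustic], dim=-1)))
        # Displacement vector conditioned on the two nonverbal modalities
        h = g_v * self.proj_v(visual) + g_a * self.proj_a(acoustic)
        # Cap the shift relative to the language embedding's norm so the
        # nonverbal signal cannot overwhelm the lexical representation
        eps = 1e-6
        scale = z.norm(dim=-1, keepdim=True) / (h.norm(dim=-1, keepdim=True) + eps)
        alpha = torch.clamp(scale * self.beta_shift, max=1.0)
        return self.dropout(self.norm(z + alpha * h))
```

Per the abstract, a module like this is attached inside BERT or XLNet so that fine-tuning can exploit the nonverbal shift; it is shown here as a standalone module for clarity.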

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 84.7 | 332 |
| Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 87.9 | 316 |
| Multimodal Sentiment Analysis | MOSEI | MAE | 0.583 | 168 |
| Multimodal Sentiment Analysis | CMU-MOSI | Accuracy (2-Class) | 83.36 | 144 |
| Alzheimer stage classification | ADNI | AUC | 73.14 | 116 |
| Multimodal Sentiment Analysis | CH-SIMS (test) | F1 Score | 71.75 | 108 |
| Mortality Prediction | MIMIC IV | Accuracy | 62.87 | 88 |
| Multimodal Sentiment Analysis | SIMS (test) | Accuracy (2-Class) | 71.43 | 78 |
| Mortality Prediction | MIMIC-IV (test) | AUC | 61.99 | 55 |
| Multimodal Sentiment Analysis | MOSEI (test) | MAE | 0.539 | 49 |

Showing 10 of 21 rows.
