
Efficient Large-Scale Multi-Modal Classification

About

While the incipient internet was largely text-based, the modern digital world is becoming increasingly multi-modal. Here, we examine multi-modal classification where one modality is discrete, e.g. text, and the other is continuous, e.g. visual representations transferred from a convolutional neural network. In particular, we focus on scenarios where we have to be able to classify large quantities of data quickly. We investigate various methods for performing multi-modal fusion and analyze their trade-offs in terms of classification accuracy and computational efficiency. Our findings indicate that the inclusion of continuous information improves performance over text-only on a range of multi-modal classification tasks, even with simple fusion methods. In addition, we experiment with discretizing the continuous features in order to speed up and simplify the fusion process even further. Our results show that fusion with discretized features outperforms text-only classification, at a fraction of the computational cost of full multi-modal fusion, with the additional benefit of improved interpretability.
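The discretization-and-fusion idea above can be sketched in a few lines. The snippet below is a hypothetical minimal illustration, not the paper's implementation: it uses uniform binning as a stand-in for the paper's quantization scheme, and early fusion by simply appending the resulting discrete "visual tokens" (offset past the text vocabulary) to the text tokens, so a fast bag-of-words classifier can consume both modalities at once. All function and parameter names here are illustrative assumptions.

```python
def discretize(features, n_bins=8, low=-1.0, high=1.0):
    """Map each continuous feature dimension to a discrete token id.

    Dimension d falling in bin b becomes token d * n_bins + b, so a
    continuous vector (e.g. CNN activations) turns into a small bag
    of discrete 'visual words'. Uniform binning is an assumption; any
    quantizer producing integer codes would slot in here.
    """
    width = (high - low) / n_bins
    tokens = []
    for d, x in enumerate(features):
        # Clamp out-of-range values into the first/last bin.
        b = min(n_bins - 1, max(0, int((x - low) / width)))
        tokens.append(d * n_bins + b)
    return tokens


def fuse(text_tokens, visual_tokens, text_vocab_size):
    """Early fusion: offset visual tokens past the text vocabulary and
    concatenate, yielding one token sequence for a linear classifier."""
    return text_tokens + [text_vocab_size + t for t in visual_tokens]


# Example: a 2-dim continuous feature becomes two discrete tokens,
# which are appended to the text tokens after an offset.
visual = discretize([0.0, 0.9], n_bins=4)
fused = fuse([1, 2], visual, text_vocab_size=10)
```

Because the fused input is just a longer bag of discrete tokens, classification cost grows only by the (fixed, small) number of visual tokens, which is the source of the speedup over full continuous fusion.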

D. Kiela, E. Grave, A. Joulin, T. Mikolov • 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multimodal Emotion Recognition | IEMOCAP (test) | Accuracy | 73.34 | 118
Audio-Image-Text Classification | IEMOCAP (test) | Accuracy | 73.34 | 116
Multimodal Multilabel Classification | MM-IMDB (test) | -- | -- | 87
Audio-Visual Classification | CREMA-D (test) | Accuracy | 59.21 | 60
Multimodal Classification | KS (test) | Accuracy | 63.72 | 48
Multimodal Classification | MVSA (test) | Accuracy (%) | 75.94 | 48
Multimodal Multiclass Classification | Food-101 (test) | Accuracy | 90.8 | 45
Image-Text Classification | Food-101 (test) | Accuracy | 88.87 | 24
Audio-Visual Event Classification | VGGSound (test) | Fusion Top-1 Acc | 38.3 | 18
Multimodal Classification | UCF101 (test) | Combined Accuracy | 46.7 | 14
Showing 10 of 13 rows
