
Efficient Large-Scale Multi-Modal Classification

About

While the incipient internet was largely text-based, the modern digital world is becoming increasingly multi-modal. Here, we examine multi-modal classification where one modality is discrete, e.g. text, and the other is continuous, e.g. visual representations transferred from a convolutional neural network. In particular, we focus on scenarios where we have to be able to classify large quantities of data quickly. We investigate various methods for performing multi-modal fusion and analyze their trade-offs in terms of classification accuracy and computational efficiency. Our findings indicate that the inclusion of continuous information improves performance over text-only on a range of multi-modal classification tasks, even with simple fusion methods. In addition, we experiment with discretizing the continuous features in order to speed up and simplify the fusion process even further. Our results show that fusion with discretized features outperforms text-only classification, at a fraction of the computational cost of full multi-modal fusion, with the additional benefit of improved interpretability.
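The two fusion strategies the abstract compares can be sketched in a few lines: early fusion concatenates the discrete text features with the continuous CNN features, while the discretized variant first quantizes each continuous dimension into bins and one-hot encodes the bin ids, so the fused representation is entirely sparse and discrete. The data, dimensions, and equal-width binning below are illustrative assumptions, not the paper's exact method (the paper also explores learned discretization such as product quantization).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: text as bag-of-words counts, images as
# continuous features transferred from a CNN (sizes are illustrative).
n, vocab, img_dim = 200, 50, 32
text = rng.poisson(0.3, size=(n, vocab)).astype(float)
img = rng.normal(size=(n, img_dim))

def fuse_concat(text_feats, img_feats):
    """Early fusion: concatenate the two modalities' feature vectors."""
    return np.concatenate([text_feats, img_feats], axis=1)

def discretize(img_feats, n_bins=8):
    """Quantize each continuous dimension into equal-width bins and
    one-hot encode the bin ids, yielding sparse discrete features
    that a fast linear text classifier can consume directly."""
    lo, hi = img_feats.min(axis=0), img_feats.max(axis=0)
    bins = ((img_feats - lo) / (hi - lo + 1e-9) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    onehot = np.zeros((img_feats.shape[0], img_feats.shape[1] * n_bins))
    rows = np.arange(img_feats.shape[0])[:, None]
    onehot[rows, np.arange(img_feats.shape[1]) * n_bins + bins] = 1.0
    return onehot

fused = fuse_concat(text, img)                   # continuous fusion
fused_disc = fuse_concat(text, discretize(img))  # discretized fusion
```

Either fused matrix can then be fed to any linear classifier; the discretized variant trades a small amount of information for a sparse representation that is cheaper to train on and easier to inspect.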

D. Kiela, E. Grave, A. Joulin, T. Mikolov• 2018

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Emotion Recognition | IEMOCAP (test) | Accuracy: 73.34 | 162
Audio-Image-Text Classification | IEMOCAP (test) | Accuracy: 73.34 | 116
Multimodal Multilabel Classification | MM-IMDB (test) | -- | 87
Audio-Visual Classification | CREMA-D (test) | Accuracy: 59.21 | 60
Multimodal Classification | KS (test) | Accuracy: 63.72 | 48
Multimodal Classification | MVSA (test) | Accuracy (%): 75.94 | 48
Multimodal Multiclass Classification | Food-101 (test) | Accuracy: 90.8 | 45
Multimodal Classification | Kinetics-Sounds (test) | Multimodal Accuracy: 49.1 | 30
Multimodal Classification | AVE (test) | Multi Acc: 59.3 | 25
Multimodal Classification | CREMA-D (test) | Multi Accuracy: 61.6 | 25
(Showing 10 of 13 rows)
