Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations

About

Learning effective joint embeddings for cross-modal data has long been a focus of multimodal machine learning. We argue that during multimodal fusion, the generated multimodal embedding may be redundant and discriminative unimodal information may be ignored, which often interferes with accurate prediction and raises the risk of overfitting. Moreover, unimodal representations also contain noisy information that negatively influences the learning of cross-modal dynamics. To this end, we introduce the multimodal information bottleneck (MIB), which aims to learn a powerful and sufficient multimodal representation that is free of redundancy, and to filter out noisy information in unimodal representations. Specifically, inheriting from the general information bottleneck (IB), MIB learns the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target while simultaneously constraining the mutual information between the representation and the input data. Unlike the general IB, our MIB regularizes both the multimodal and unimodal representations, yielding a comprehensive and flexible framework that is compatible with any fusion method. We develop three MIB variants, namely early-fusion MIB, late-fusion MIB, and complete MIB, to focus on different perspectives of information constraints. Experimental results show that the proposed method achieves state-of-the-art performance on multimodal sentiment analysis and multimodal emotion recognition across three widely used datasets. The code is available at https://github.com/TmacMai/Multimodal-Information-Bottleneck.
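In IB terms, the objective is to minimize L = -I(z; y) + β·I(z; x), where z is the learned representation, y the target, x the input, and β trades off sufficiency against minimality. As a concrete illustration (a minimal sketch of the standard variational IB of Alemi et al., not the authors' implementation), the code below shows a generic bottleneck head in PyTorch in the spirit of the late-fusion variant: unimodal features are fused, a stochastic z is sampled via the reparameterization trick, the KL term to a standard normal prior upper-bounds I(z; x), and the task loss lower-bounds I(z; y). All names (VIBHead, beta) and the toy shapes are assumptions for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VIBHead(nn.Module):
        """Sketch of a variational information bottleneck head (assumed design,
        not the paper's code). A Gaussian posterior q(z|h) is parameterized by
        two linear layers; KL(q(z|h) || N(0, I)) penalizes I(z; x)."""
        def __init__(self, in_dim, z_dim, num_classes):
            super().__init__()
            self.mu = nn.Linear(in_dim, z_dim)       # posterior mean
            self.logvar = nn.Linear(in_dim, z_dim)   # posterior log-variance
            self.classifier = nn.Linear(z_dim, num_classes)

        def forward(self, h):
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            # Closed-form KL divergence between N(mu, sigma^2) and N(0, I)
            kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
            return self.classifier(z), kl

    # Toy usage with random unimodal features (hypothetical shapes/labels):
    h_text, h_audio, h_video = (torch.randn(8, 64) for _ in range(3))
    labels = torch.randint(0, 3, (8,))
    vib = VIBHead(in_dim=192, z_dim=32, num_classes=3)
    logits, kl = vib(torch.cat([h_text, h_audio, h_video], dim=-1))
    beta = 1e-3  # assumed bottleneck weight; small so the task loss dominates
    loss = F.cross_entropy(logits, labels) + beta * kl

The early-fusion and complete variants named in the abstract would, presumably, place similar bottlenecks on the unimodal representations before fusion as well; the sketch above only constrains the fused representation.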

Sijie Mai, Ying Zeng, Haifeng Hu • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 86.8 | 332
Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 | 87.8 | 316
Multimodal Sentiment Analysis | CMU-MOSI | Accuracy (2-Class) | 87.8 | 144
Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 78.8 | 96
Multimodal Sentiment Analysis | CMU-MOSI v1 (test) | Accuracy (2-Class) | 87.8 | 72
Multimodal Sentiment Analysis | MOSI (test) | -- | -- | 34
Multimodal Sentiment Analysis | CMU-MOSEI | A2 Score | 86.1 | 27
Emotion Recognition | CMU-MOSEI | -- | -- | 19
Feature Attribution | MS-CXR text (test) | Conf. Drop (%) | 2.28 | 13
Multimodal regression | Superconductivity (test) | RMSE | 15.18 | 13

(Showing 10 of 16 rows.)
