
Provable Dynamic Fusion for Low-Quality Multimodal Data

About

The inherent challenge of multimodal fusion is to precisely capture cross-modal correlation and flexibly conduct cross-modal interaction. To fully release the value of each modality and mitigate the influence of low-quality multimodal data, dynamic multimodal fusion emerges as a promising learning paradigm. Despite its widespread use, theoretical justifications in this field are still notably lacking. Can we design a provably robust multimodal fusion method? This paper provides a theoretical understanding to answer this question under one of the most popular multimodal fusion frameworks, from the generalization perspective. We proceed to reveal that several uncertainty estimation solutions are naturally available to achieve robust multimodal fusion. We then propose a novel multimodal fusion framework, termed Quality-aware Multimodal Fusion (QMF), which improves performance in terms of both classification accuracy and model robustness. Extensive experimental results on multiple benchmarks support our findings.
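The core idea of quality-aware fusion can be illustrated with a small sketch: each modality produces its own class logits, a per-modality confidence is estimated (for example via an energy score over the logits), and the fused prediction down-weights low-quality modalities. This is a minimal illustration under assumed conventions, not the authors' implementation; all function names and the example data are hypothetical.

```python
import numpy as np

def energy_confidence(logits, T=1.0):
    # Negative energy score: T * logsumexp(logits / T).
    # Peaked (confident) logits yield a higher value than flat ones.
    z = logits / T
    return T * (z.max() + np.log(np.sum(np.exp(z - z.max()))))

def quality_aware_fusion(modality_logits):
    # Softmax over per-modality confidences gives fusion weights,
    # so uncertain (low-quality) modalities contribute less.
    confs = np.array([energy_confidence(l) for l in modality_logits])
    weights = np.exp(confs - confs.max())
    weights /= weights.sum()
    fused = sum(w * l for w, l in zip(weights, modality_logits))
    return fused, weights

# Hypothetical example: a confident image modality, a noisy text modality.
image_logits = np.array([4.0, 0.5, 0.2])  # peaked -> high confidence
text_logits = np.array([1.1, 1.0, 0.9])   # flat   -> low confidence
fused, w = quality_aware_fusion([image_logits, text_logits])
print(fused.argmax(), w)  # the image modality dominates the fusion
```

In this sketch the fusion weights follow the confidence gap between modalities, which is the behavior one would expect a quality-aware scheme to exhibit on corrupted inputs.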

Qingyang Zhang, Haitao Wu, Changqing Zhang, Qinghua Hu, Huazhu Fu, Joey Tianyi Zhou, Xi Peng • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Emotion Recognition | IEMOCAP (test) | Accuracy | 76.17 | 118 |
| Audio-Image-Text Classification | IEMOCAP (test) | Accuracy | 76.17 | 116 |
| Emotion Recognition | IEMOCAP | Accuracy | 72.08 | 71 |
| Audio-Visual Classification | CREMA-D (test) | Accuracy | 63.71 | 60 |
| Multimodal Classification | KS (test) | Accuracy | 65.78 | 48 |
| Multimodal Classification | MVSA (test) | Accuracy (%) | 77.96 | 48 |
| Multimodal Multiclass Classification | Food-101 (test) | Accuracy | 92.87 | 45 |
| Multimodal Classification | BRCA (train/test) | Accuracy | 82.5 | 36 |
| Multimodal Classification | FOOD101 UPMC (train/test) | Accuracy | 91.7 | 36 |
| Multimodal Classification | ROSMAP (train/test) | Accuracy | 78.3 | 36 |

Showing 10 of 23 rows.
