
FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion

About

As machine learning models in critical fields increasingly grapple with multimodal data, they face the dual challenges of handling a wide array of modalities, often incomplete due to missing elements, and the temporal irregularity and sparsity of collected samples. Successfully leveraging this complex data, while overcoming the scarcity of high-quality training samples, is key to improving these models' predictive performance. We introduce "FuseMoE", a mixture-of-experts framework equipped with an innovative gating function. Designed to integrate a flexible number of modalities, FuseMoE is effective in managing scenarios with missing modalities and irregularly sampled data trajectories. Theoretically, our unique gating function contributes to enhanced convergence rates, leading to better performance on multiple downstream tasks. The practical utility of FuseMoE in real-world settings is validated on a diverse set of challenging prediction tasks.
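The abstract describes the architecture but not the gating design itself, so the snippet below is only a generic illustration of the kind of layer FuseMoE builds on: a minimal top-k, softmax-gated sparse mixture-of-experts layer in PyTorch. All names here (SparseMoELayer, num_experts, top_k) are illustrative rather than from the paper, and the plain softmax gate stands in for the paper's own specialized gating function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Minimal sketch of a sparse mixture-of-experts layer.

    A generic top-k softmax-gated MoE, not FuseMoE's gating
    function; it only illustrates the routing mechanism that
    mixture-of-experts fusion frameworks are built around.
    """

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The gate scores every expert for every input.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim), e.g. fused embeddings pooled over modalities.
        scores = self.gate(x)                           # (batch, num_experts)
        top_w, top_i = scores.topk(self.top_k, dim=-1)  # keep the k best experts
        top_w = F.softmax(top_w, dim=-1)                # renormalize kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e              # inputs routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

For example, `SparseMoELayer(dim=64)(torch.randn(32, 64))` returns a `(32, 64)` tensor. Routing each input to only its top-k experts keeps per-example compute roughly constant as experts are added, which is part of what makes MoE layers attractive for fusing many modalities.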

Xing Han, Huy Nguyen, Carl Harris, Nhat Ho, Suchi Saria • 2024

Related benchmarks

Task                            Dataset          Metric    Result  Rank
Alzheimer stage classification  ADNI             AUC       72.37   116
Human Activity Recognition      REALDISP         F1        96.69   94
Human Activity Recognition      DailySport       F1        90.13   78
Human Activity Recognition      UP-Fall          F1        88.69   78
Mortality Prediction            MIMIC-IV (test)  AUC       62.52   43
Human Activity Recognition      CMDFall          F1        76.22   36
Mortality Prediction            MIMIC-IV         Accuracy  64.77   24
