
Addressing Missing and Noisy Modalities in One Solution: Unified Modality-Quality Framework for Low-quality Multimodal Data

About

Multimodal data encountered in real-world scenarios are typically of low quality, with noisy modalities and missing modalities being common forms that severely hinder model performance and robustness. However, prior works often handle noisy and missing modalities separately. In contrast, we jointly address missing and noisy modalities to enhance model robustness in low-quality data scenarios. We regard both noisy and missing modalities as a unified low-quality modality problem, and propose a unified modality-quality (UMQ) framework to enhance low-quality representations for multimodal affective computing. First, we train a quality estimator with explicit supervision via a rank-guided training strategy that compares the relative quality of different representations through a ranking constraint, avoiding the training noise caused by inaccurate absolute quality labels. Then, a quality enhancer is constructed for each modality, which uses the sample-specific information provided by other modalities and the modality-specific information provided by a defined modality baseline representation to enhance the quality of unimodal representations. Finally, we propose a quality-aware mixture-of-experts module with a dedicated routing mechanism, so that different modality-quality problems can be addressed more specifically. UMQ consistently outperforms state-of-the-art baselines on multiple datasets under complete, missing, and noisy modality settings.
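The rank-guided training strategy described above can be illustrated with a standard margin ranking loss: the estimator is only asked to score a representation known to be higher quality above a lower-quality one, rather than to match an absolute quality label. The sketch below, in plain numpy, also includes a guess at what quality-aware expert routing might look like; the function names, the additive quality bias, and the top-k gating are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def rank_guided_loss(q_high, q_low, margin=0.2):
    # Margin ranking constraint: penalise the quality estimator whenever
    # the representation known to be higher quality (e.g. the clean one)
    # does not score at least `margin` above the lower-quality one.
    # No absolute quality label is needed, only the relative order.
    return float(np.maximum(0.0, margin - (q_high - q_low)).mean())

def quality_aware_routing(gate_logits, quality, n_active=2):
    # Hypothetical quality-aware gating: bias each expert's logit by the
    # estimated quality score, keep the top `n_active` experts, and
    # normalise their weights with a softmax. Inactive experts get 0.
    logits = gate_logits + quality
    top = np.argsort(logits)[::-1][:n_active]
    weights = np.zeros_like(logits, dtype=float)
    e = np.exp(logits[top] - logits[top].max())  # stable softmax
    weights[top] = e / e.sum()
    return weights
```

For example, scoring a clean representation at 0.9 and its noise-corrupted copy at 0.3 satisfies the ranking constraint (zero loss), while scoring them 0.4 and 0.5 violates the ordering and incurs a positive penalty.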

Sijie Mai, Shiqin Han, Haifeng Hu • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multimodal Sentiment Analysis | CMU-MOSEI (test) | F1 Score | 88.1 | 332
Multimodal Sentiment Analysis | CMU-MOSI (test) | F1 Score | 90 | 316
Multimodal Sentiment Analysis | CMU-MOSI | Accuracy (2-Class) | 90.1 | 144
Multimodal Sentiment Analysis | CH-SIMS (test) | F1 Score | 82.6 | 108
Multimodal Sentiment Analysis | CMU-MOSI v1 (test) | Accuracy (2-Class) | 89.2 | 72
Multimodal Sentiment Analysis | CMU-MOSEI | A2 Score | 87.3 | 27
Humor Detection | UR-FUNNY | -- | -- | 20
Multimodal Sarcasm Detection | MUSTARD | Accuracy | 80.6 | 6
Multimodal Sentiment Analysis | CMU-MOSEI v1 (test) | Accuracy (2-Class) | 87.3 | 6
