
Retrieval-Augmented Dynamic Prompt Tuning for Incomplete Multimodal Learning

About

Multimodal learning with incomplete modality is practical and challenging. Recently, researchers have focused on enhancing the robustness of pre-trained MultiModal Transformers (MMTs) under missing modality conditions by applying learnable prompts. However, these prompt-based methods face several limitations: (1) incomplete modalities provide restricted modal cues for task-specific inference, (2) dummy imputation for missing content causes information loss and introduces noise, and (3) static prompts are instance-agnostic, offering limited knowledge for instances with various missing conditions. To address these issues, we propose RAGPT, a novel Retrieval-AuGmented dynamic Prompt Tuning framework. RAGPT comprises three modules: (I) the multi-channel retriever, which identifies similar instances through a within-modality retrieval strategy, (II) the missing modality generator, which recovers missing information using retrieved contexts, and (III) the context-aware prompter, which captures contextual knowledge from relevant instances and generates dynamic prompts to largely enhance the MMT's robustness. Extensive experiments conducted on three real-world datasets show that RAGPT consistently outperforms all competitive baselines in handling incomplete modality problems. The code of our work and prompt-based baselines is available at https://github.com/Jian-Lang/RAGPT.
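To make the first two modules concrete, here is a minimal sketch of the ideas behind (I) within-modality retrieval and (II) retrieval-based recovery of a missing modality. All function names, the pure-Python feature vectors, and the mean-pooling imputation are illustrative assumptions, not RAGPT's actual implementation (which operates on MMT embeddings; see the linked repository for the real code).

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_within_modality(query_feat, memory_feats, k=3):
    # Within-modality retrieval: rank stored instances by similarity to the
    # query inside a SINGLE modality (e.g. text-to-text), so a present
    # modality can find relevant neighbors even when another one is missing.
    scored = sorted(
        ((cosine(query_feat, m), i) for i, m in enumerate(memory_feats)),
        reverse=True,
    )
    return [i for _, i in scored[:k]]

def impute_missing_modality(neighbor_feats):
    # Illustrative recovery step: approximate the missing modality's feature
    # as the mean of the retrieved neighbors' features in that modality,
    # instead of a dummy (e.g. all-zero) imputation.
    dim = len(neighbor_feats[0])
    n = len(neighbor_feats)
    return [sum(f[d] for f in neighbor_feats) / n for d in range(dim)]
```

For example, an instance with text but no image would retrieve neighbors via its text feature, then impute its image feature from the neighbors' image features. The retrieved neighbors also supply the contextual knowledge that module (III) turns into instance-specific dynamic prompts.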

Jian Lang, Zhangtao Cheng, Ting Zhong, Fan Zhou• 2025

Related benchmarks

Task                                  | Dataset         | Metric   | Result | Rank
Multimodal Multilabel Classification  | MM-IMDB (test)  | Macro F1 | 54.33  | 87
Image Classification                  | Food101 (test)  | Accuracy | 82.42  | 87
Multi-modal Hate Speech Detection     | MMHS11K (test)  | Accuracy | 76.93  | 21
Multimodal Classification             | N24News (test)  | Accuracy | 61.18  | 21
