
BadCLIP++: Stealthy and Persistent Backdoors in Multimodal Contrastive Learning

About

Research on backdoor attacks against multimodal contrastive learning models faces two key challenges: stealthiness and persistence. Existing methods often fail under strong detection or continuous fine-tuning, largely due to (1) cross-modal inconsistency that exposes trigger patterns and (2) gradient dilution at low poisoning rates that accelerates backdoor forgetting. These coupled causes remain insufficiently modeled and addressed. We propose BadCLIP++, a unified framework that tackles both challenges. For stealthiness, we introduce a semantic-fusion QR micro-trigger that embeds imperceptible patterns near task-relevant regions, preserving clean-data statistics while producing compact trigger distributions. We further apply target-aligned subset selection to strengthen signals at low injection rates. For persistence, we stabilize trigger embeddings via radius shrinkage and centroid alignment, and stabilize model parameters through curvature control and elastic weight consolidation, maintaining solutions within a low-curvature wide basin resistant to fine-tuning. We also provide the first theoretical analysis showing that, within a trust region, gradients from clean fine-tuning and backdoor objectives are co-directional, yielding a non-increasing upper bound on attack success degradation. Experiments demonstrate that with only 0.3% poisoning, BadCLIP++ achieves 99.99% attack success rate (ASR) in digital settings, surpassing baselines by 11.4 points. Across nineteen defenses, ASR remains above 99.90% with less than 0.8% drop in clean accuracy. The method further attains 65.03% success in physical attacks and shows robustness against watermark removal defenses.
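The persistence mechanism combines three regularizers on top of the poisoning objective: shrinking the radius of the trigger-embedding cluster, aligning its centroid with the target class, and constraining parameter drift via elastic weight consolidation (EWC). The paper's exact loss formulation is not given here, so the following is a minimal NumPy sketch under assumed forms of each term; the function name, weighting coefficients, and use of a diagonal Fisher matrix for EWC (as in Kirkpatrick et al.) are all illustrative assumptions.

```python
import numpy as np

def persistence_loss(trigger_embs, target_centroid, params, params_old, fisher,
                     lam_radius=1.0, lam_align=1.0, lam_ewc=1.0):
    """Hypothetical combined regularizer (names and weights are assumptions):
    - radius shrinkage: pull poisoned embeddings toward their own mean,
      yielding a compact trigger distribution;
    - centroid alignment: pull that cluster mean toward the target-class
      centroid in embedding space;
    - EWC: penalize drift from the backdoored parameters, weighted by a
      diagonal Fisher-information estimate, to keep the solution in the
      same basin under clean fine-tuning.
    """
    center = trigger_embs.mean(axis=0)
    radius = np.mean(np.sum((trigger_embs - center) ** 2, axis=1))  # shrinkage term
    align = np.sum((center - target_centroid) ** 2)                 # alignment term
    ewc = np.sum(fisher * (params - params_old) ** 2)               # EWC drift penalty
    return lam_radius * radius + lam_align * align + lam_ewc * ewc
```

As a sanity check, the loss vanishes exactly when the trigger embeddings collapse onto the target centroid and the parameters have not moved, which is the fixed point the three terms jointly encourage.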

Siyuan Liang, Yongcheng Jing, Yingjie Wang, Jiaxing Huang, Ee-chien Chang, Dacheng Tao • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet V2 (test) | - | - | 181
Image Classification | ImageNet-A (test) | - | - | 154
Image Classification | ImageNet-Sketch (test) | - | - | 132
Image-Text Retrieval | COCO (test) | Recall@1 | 39.01 | 37
Image Classification | ImageNet In-Distribution (test) | ID Accuracy | 58.92 | 23
Image Classification | Zero-shot evaluation | CA | 58.92 | 14
Image Classification | CIFAR-10 | Clean Accuracy (CA) | 87.57 | 14
Image Classification | ImageNet 1k (test) | CA (Accuracy) | 58.92 | 14
Text-Image Retrieval | SBU (test) | R@1 | 36.28 | 14
Image Classification | Oxford-IIIT Pet | CA | 85.28 | 14

Showing 10 of 16 rows
