
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning

About

Multimodal contrastive pretraining has been used to train multimodal representation models, such as CLIP, on large amounts of paired image-text data. However, previous studies have revealed that such models are vulnerable to backdoor attacks. Specifically, when trained on backdoored examples, CLIP learns spurious correlations between the embedded backdoor trigger and the target label, aligning their representations in the joint embedding space. Injecting even a small number of poisoned examples, as few as 75 among 3 million pretraining pairs, can significantly manipulate the model's behavior, making such correlations difficult to detect or unlearn. To address this issue, we propose CleanCLIP, a finetuning framework that weakens the spurious associations introduced by backdoor attacks by independently re-aligning the representations of each modality. We demonstrate that unsupervised finetuning with a combination of multimodal contrastive and unimodal self-supervised objectives for the individual modalities can significantly reduce the impact of the backdoor attack. Additionally, we show that supervised finetuning on task-specific labeled image data removes the backdoor trigger from the CLIP vision encoder. We show empirically that CleanCLIP maintains model performance on benign examples while erasing a range of backdoor attacks on multimodal contrastive learning. The code and checkpoints are available at https://github.com/nishadsinghi/CleanCLIP.
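The unsupervised finetuning objective described above combines the standard image-text contrastive loss with a self-supervised term per modality, where each example is aligned with an augmented view of itself. A minimal NumPy sketch of such a combined objective is below; the weighting names (lambda_mm, lambda_ss) and the symmetric InfoNCE formulation are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of embeddings; matching
    pairs sit on the diagonal of the similarity matrix."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    logits = a @ b.T / temperature

    def ce(l):
        # Cross-entropy with diagonal targets, numerically stabilized.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (ce(logits) + ce(logits.T))

def cleanclip_loss(img, txt, img_aug, txt_aug, lambda_mm=1.0, lambda_ss=1.0):
    # Multimodal term: align each image with its paired caption.
    l_mm = info_nce(img, txt)
    # Unimodal terms: align each example with its own augmented view,
    # independently per modality, without relying on the (possibly
    # poisoned) image-text pairing.
    l_ss = info_nce(img, img_aug) + info_nce(txt, txt_aug)
    return lambda_mm * l_mm + lambda_ss * l_ss

# Toy usage: random embeddings stand in for encoder outputs.
rng = np.random.default_rng(0)
N, D = 8, 64
loss = cleanclip_loss(rng.standard_normal((N, D)), rng.standard_normal((N, D)),
                      rng.standard_normal((N, D)), rng.standard_normal((N, D)))
```

The key design point is that the unimodal terms give the encoders a training signal that does not pass through the cross-modal pairing, which is exactly where the backdoor correlation lives.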

Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang• 2023

Related benchmarks

Task                             | Dataset                  | Metric         | Result | Rank
Classification                   | ImageNet standard (test) | Clean Accuracy | 53.8   | 31
Text Retrieval                   | COCO 5k points (val)     | Clean Accuracy | 70.4   | 31
Text Retrieval                   | COCO                     | Clean Accuracy | 70.6   | 21
Classification                   | ImageNet                 | Clean Accuracy | 55.2   | 21
Machine-generated text detection | Combined Drift (test)    | Accuracy       | 78.5   | 6
Classification                   | ImageNet BadNet-Stripes  | Clean Accuracy | 68.7   | 3
Classification                   | ImageNet Blended-Text    | Clean Accuracy | 68.9   | 3
Classification                   | COCO BadNet-Stripes      | Clean Accuracy | 78.8   | 3
Classification                   | COCO Blended-Text        | Clean Accuracy | 78.7   | 3
