Hierarchical Refinement of Universal Multimodal Attacks on Vision-Language Models
About
Existing adversarial attacks on vision-language pre-training (VLP) models are mostly sample-specific, incurring substantial computational overhead when scaled to large datasets or new scenarios. To overcome this limitation, we propose the Hierarchical Refinement Attack (HRA), a multimodal universal attack framework for VLP models. For the image modality, we refine the optimization path by leveraging a temporal hierarchy of historical and estimated future gradients, which helps avoid local minima and stabilizes universal perturbation learning. For the text modality, we hierarchically model textual importance by considering both intra- and inter-sentence contributions to identify globally influential words, which are then used as universal text perturbations. Extensive experiments across various downstream tasks, VLP models, and datasets demonstrate the superior transferability of the proposed universal multimodal attacks.
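To make the image-side idea concrete, below is a minimal PyTorch-style sketch of one universal-perturbation update that fuses a historical gradient (a momentum buffer accumulated over past batches) with an estimated future gradient (evaluated at a lookahead point along the momentum direction). This is an illustration of the general technique, not the paper's exact update rule: `model`, `loss_fn`, the equal fusion weights, and the L∞ step sizes are all assumptions.

```python
import torch

def hra_image_step(delta, images, texts, model, loss_fn, momentum,
                   mu=0.9, alpha=2 / 255, eps=8 / 255):
    """One update of the universal image perturbation `delta` (hedged sketch).

    Fuses a historical gradient (momentum over past batches) with an
    estimated future gradient (computed at a lookahead point), then takes a
    signed ascent step projected back into the L-inf ball of radius `eps`.
    `model` and `loss_fn` are hypothetical stand-ins for a VLP model and an
    attack objective (e.g., negative image-text matching score).
    """
    # Historical term: loss gradient at the current perturbation.
    d = delta.detach().requires_grad_(True)
    loss = loss_fn(model(images + d, texts))
    g_hist, = torch.autograd.grad(loss, d)

    # Estimated future term: gradient at a lookahead point one signed
    # momentum step ahead (Nesterov-style estimate of the next iterate).
    d_ahead = (delta + alpha * momentum.sign()).detach().requires_grad_(True)
    loss_ahead = loss_fn(model(images + d_ahead, texts))
    g_future, = torch.autograd.grad(loss_ahead, d_ahead)

    # Fuse the two temporal levels into the momentum buffer; the equal
    # 0.5/0.5 weighting is a placeholder choice, not the paper's.
    g = 0.5 * (g_hist + g_future)
    momentum = mu * momentum + g / g.abs().mean().clamp_min(1e-12)

    # Signed ascent step, then project back into the eps-ball.
    delta = (delta + alpha * momentum.sign()).clamp(-eps, eps).detach()
    return delta, momentum
```

Similarly hedged, the text-side sketch below scores candidate words by first computing intra-sentence token saliency, then accumulating those scores across all sentences to obtain a dataset-level (inter-sentence) importance ranking; the top-ranked words would serve as the universal text perturbation. The gradient-norm saliency and the per-sentence normalization are placeholder choices standing in for the paper's intra- and inter-sentence contribution measures.

```python
def global_word_importance(grads_per_sentence, token_ids, vocab_size):
    """Aggregate intra-sentence token saliency into inter-sentence
    (dataset-level) word scores (hedged sketch).

    `grads_per_sentence`: list of [T, D] gradients of the attack loss
    w.r.t. each sentence's token embeddings.
    `token_ids`: list of matching [T] LongTensors of vocabulary indices.
    """
    scores = torch.zeros(vocab_size)
    for g, ids in zip(grads_per_sentence, token_ids):
        sal = g.norm(dim=-1)                     # intra-sentence: per-token saliency
        sal = sal / sal.sum().clamp_min(1e-12)   # normalize within the sentence
        scores.index_add_(0, ids, sal)           # inter-sentence: accumulate globally
    return scores                                # top-k entries -> universal words
```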
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1 | 66.46 | 460 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 46.51 | 379 |
| Visual Grounding | RefCOCO+ (val) | Accuracy | 34.21 | 171 |
| Visual Grounding | RefCOCO+ (testB) | Accuracy | 31.85 | 169 |
| Visual Grounding | RefCOCO+ (testA) | Accuracy | 34.93 | 168 |
| Image Captioning | MSCOCO (test) | CIDEr | 108.2 | 29 |