PDA: Text-Augmented Defense Framework for Robust Vision-Language Models against Adversarial Image Attacks

About

Vision-language models (VLMs) are vulnerable to adversarial image perturbations. Existing defenses based on adversarial training against task-specific adversarial examples are computationally expensive and often fail to generalize to unseen attack types. To address these limitations, we introduce Paraphrase-Decomposition-Aggregation (PDA), a training-free defense framework that leverages text augmentation to enhance VLM robustness under diverse adversarial image attacks. PDA performs prompt paraphrasing, question decomposition, and consistency aggregation entirely at test time, requiring no modification to the underlying models. To balance robustness and efficiency, we instantiate lightweight PDA variants that reduce inference cost while retaining most of the robustness gains. Experiments on multiple VLM architectures and benchmarks for visual question answering, classification, and captioning show that PDA achieves consistent robustness gains against various adversarial perturbations while maintaining competitive clean accuracy, establishing a generic, strong, and practical inference-time defense framework for VLMs.
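The three test-time steps named in the abstract (prompt paraphrasing, question decomposition, consistency aggregation) can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: `vlm_answer`, `paraphrase`, and `decompose` are assumed interfaces, and majority voting is one plausible form of consistency aggregation.

```python
from collections import Counter
from typing import Callable, List

def pda_answer(
    image: object,
    question: str,
    vlm_answer: Callable[[object, str], str],   # assumed VLM interface
    paraphrase: Callable[[str], List[str]],     # assumed paraphraser
    decompose: Callable[[str], List[str]],      # assumed decomposer
) -> str:
    """Training-free defense sketch: query the VLM with the original,
    paraphrased, and decomposed prompts, then aggregate the answers
    by majority vote (one possible consistency-aggregation rule)."""
    prompts = [question] + paraphrase(question) + decompose(question)
    answers = [vlm_answer(image, p) for p in prompts]
    # Consistency aggregation: the most frequent answer wins.
    return Counter(answers).most_common(1)[0][0]

# Toy stand-ins so the sketch runs end to end.
def toy_paraphrase(q: str) -> List[str]:
    return [q.lower(), q + "?"]

def toy_decompose(q: str) -> List[str]:
    return [q.split()[0]]

def toy_vlm(image: object, prompt: str) -> str:
    # Simulate an attack flipping the answer for one prompt variant.
    return "dog" if prompt.endswith("?") else "cat"

print(pda_answer(None, "What animal is shown", toy_vlm,
                 toy_paraphrase, toy_decompose))  # → cat
```

Because only one of the four prompt variants yields the corrupted answer, the vote recovers the consistent prediction, which is the intuition behind aggregating over text augmentations.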

Jingning Xu, Haochen Luo, Chen Liu • 2026

Related benchmarks

| Task                      | Dataset    | Metric         | Result | Rank |
|---------------------------|------------|----------------|--------|------|
| Visual Question Answering | VQA v2     | -              | -      | 1362 |
| Image Classification      | ImageNet-D | Top-1 Accuracy | 79.8   | 36   |
| Image Captioning          | MS-COCO    | CLIPScore      | 0.862  | 36   |
| Visual Question Answering | VQA v2     | Robust Accuracy| 90     | 12   |
