
Few-Shot Adversarial Prompt Learning on Vision-Language Models

About

The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention. Inspired by the success of vision-language foundation models, previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision. In practice, however, they remain unsatisfactory due to several issues, including heavy adaptation cost, suboptimal text supervision, and uncontrolled natural generalization capacity. To address these issues, we propose a few-shot adversarial prompt framework in which adapting input sequences with limited data yields significant improvements in adversarial robustness. Specifically, we achieve this by providing adversarially correlated text supervision that is learned end-to-end from adversarial examples. We also propose a novel training objective that enhances the consistency of multi-modal features while encouraging differentiated uni-modal features between natural and adversarial examples. The proposed framework enables learning adversarial text supervision, which provides superior cross-modal adversarial alignment and matches state-of-the-art zero-shot adversarial robustness with only 1% of the training data. Code is available at: https://github.com/lionel-w2/FAP.
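The core loop described above can be illustrated with a minimal sketch: craft a PGD-style perturbation within an L-infinity ball, align the adversarial image feature with a text-supervision embedding, and push apart natural and adversarial uni-modal features. Everything here is an illustrative assumption, not the authors' implementation: the toy linear encoder, the function names (`pgd_perturb`, `fap_style_loss`), and the loss weighting are hypothetical stand-ins for the CLIP encoders and learned prompts used in the paper.

```python
import numpy as np

def normalize(v):
    # Unit-normalize a feature vector (cosine-similarity convention).
    return v / (np.linalg.norm(v) + 1e-8)

def pgd_perturb(x, grad_fn, eps=0.03, alpha=0.01, steps=5):
    # Projected gradient ascent: take sign-gradient steps and
    # project back into the L-infinity ball of radius eps around x.
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                      # gradient of the attack loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)      # FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection
    return x_adv

def fap_style_loss(f_nat, f_adv, t, lam=1.0):
    # Cross-modal term: pull the adversarial image feature toward the
    # text-supervision embedding t (1 - cosine similarity).
    align = 1.0 - normalize(f_adv) @ normalize(t)
    # Uni-modal term: encourage natural and adversarial image features
    # to differ (penalize high cosine similarity between them).
    differ = normalize(f_nat) @ normalize(f_adv)
    return align + lam * differ

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=8)                      # toy "image" input
    t = rng.normal(size=8)                      # toy text embedding (stands in for a learned prompt)
    W = rng.normal(size=(8, 8))                 # toy linear image encoder

    # Surrogate attack gradient: direction that increases alignment with t.
    grad_fn = lambda xa: W.T @ normalize(t)
    x_adv = pgd_perturb(x, grad_fn)
    print(fap_style_loss(W @ x, W @ x_adv, t))
```

In the actual framework, `grad_fn` would come from backpropagation through the frozen image encoder, and `t` would be the output of the text encoder on learnable prompt tokens; the sketch only shows how the two loss terms combine.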

Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu · 2024

Related benchmarks

Task                  Dataset        Metric      Result  Rank
Image Classification  EuroSAT        Accuracy    12.70   497
Image Classification  Flowers102     Accuracy    22.52   478
Image Classification  DTD            Accuracy    15.94   419
Image Classification  UCF101         Top-1 Acc   16.41   404
Image Classification  Food101        Accuracy    8.83    309
Image Classification  StanfordCars   Accuracy    4.95    266
Image Classification  SUN397         Accuracy    15.90   246
Image Classification  FGVCAircraft   Accuracy    2.40    225
Image Classification  ImageNet-1K    Accuracy    13.95   190
Image Classification  Caltech101     Accuracy    61.17   162

(Showing 10 of 25 rows.)
