Proxy Robustness in Vision Language Models is Effortlessly Transferable
About
As a pivotal technique for improving the defense of deep models, adversarial robustness transfer via distillation has demonstrated remarkable success in conventional image classification tasks. However, this paradigm encounters critical challenges when applied to vision-language models (VLMs) such as CLIP: constructing an adversarially robust teacher for large-scale multi-modal models demands prohibitively high computational resources. We bridge this gap by revealing an interesting phenomenon: vanilla CLIP (without adversarial training) exhibits intrinsic defensive capability against adversarial examples generated by another CLIP with a different architecture. We formally define this as proxy adversarial robustness and propose a Heterogeneous Proxy Transfer (HPT) framework that establishes cross-architectural robustness distillation channels between CLIP variants, effortlessly transferring VLM robustness from proxy to target models. However, such a proxy transfer paradigm easily induces severe overfitting, causing a sharp degradation in zero-shot natural generalization. To resolve this, we design Generalization-Pivot Decoupling (GPD), which exploits differences in learning rate scheduling to decouple the proxy transfer process into a generalization-anchored warm-up that preserves generalization and a generalization-pulled HPT stage that promotes adversarial robustness, achieving an equilibrium between natural generalization and adversarial robustness. Extensive experiments on 15 zero-shot datasets demonstrate the effectiveness of our HPT-GPD method. The code is available at github.com/fxw13/HPT-GPD.
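The core measurement behind proxy adversarial robustness is simple: craft adversarial examples against one (proxy) model and evaluate a different (target) model on them. Below is a minimal NumPy sketch of that protocol, using two small logistic-regression classifiers as toy stand-ins for heterogeneous CLIP encoders. Everything here is illustrative (the model, data, and one-step FGSM attack are assumptions, not the paper's setup, which uses CLIP variants and stronger attacks); the point is only the measurement loop, not the magnitudes it produces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian data; NOT image data, purely illustrative.
d, n = 20, 500
mu = np.zeros(d)
mu[0] = 2.0
X = np.vstack([rng.normal(-mu, 1.0, (n, d)), rng.normal(mu, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)

def train_linear(X, y, seed):
    # Plain logistic regression via gradient descent; different seeds
    # stand in for "heterogeneous architectures" in this toy sketch.
    r = np.random.default_rng(seed)
    w, b = r.normal(0, 0.01, X.shape[1]), 0.0
    for _ in range(200):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # dL/dlogit for logistic loss
        w -= 0.1 * X.T @ g / len(y)
        b -= 0.1 * g.mean()
    return w, b

def fgsm(X, y, w, b, eps):
    # One-step FGSM: perturb inputs along the sign of dL/dX
    # computed on the attacked (proxy) model.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = np.outer(p - y, w)
    return X + eps * np.sign(grad)

def acc(X, y, w, b):
    return float(((X @ w + b > 0).astype(int) == y).mean())

w_p, b_p = train_linear(X, y, seed=1)   # "proxy" model
w_t, b_t = train_linear(X, y, seed=2)   # "target" model

X_adv = fgsm(X, y, w_p, b_p, eps=0.5)   # attack crafted on the proxy only
print("target clean accuracy:        ", acc(X, y, w_t, b_t))
print("target accuracy on proxy adv.:", acc(X_adv, y, w_t, b_t))
print("proxy accuracy on its own adv:", acc(X_adv, y, w_p, b_p))
```

The gap between the target's clean accuracy and its accuracy on the proxy's adversarial examples is exactly the quantity the paper studies at CLIP scale; real CLIP encoders with different architectures keep that gap surprisingly small.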
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | Caltech256 | Clean Accuracy | 79.63 | 51 |
| Image Classification | Flowers102 | Clean Accuracy | 50.11 | 49 |
| Image Classification | StanfordCars | Clean Accuracy | 45 | 40 |
| Classification | PCAM | -- | -- | 39 |
| Image Classification | CIFAR10 | Clean Accuracy | 90.74 | 37 |
| Image Classification | OxfordPets | Robust Accuracy | 10.06 | 27 |
| Image Classification | CIFAR100 | Clean Accuracy | 64.83 | 27 |
| Image Classification | Food101 | Clean Accuracy | 75.05 | 25 |
| Image Classification | Caltech101 | Clean Accuracy | 22.87 | 15 |
| Image Classification | SUN397 | Robust Accuracy | 4.55 | 14 |