
Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning

About

Recent works have shown that self-supervised learning can achieve remarkable robustness when integrated with adversarial training (AT). However, the robustness gap between supervised AT (sup-AT) and self-supervised AT (self-AT) remains significant. Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: both strong and weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap. To resolve this dilemma, we propose a simple remedy named DYNACL (Dynamic Adversarial Contrastive Learning). In particular, we propose an augmentation schedule that gradually anneals from a strong augmentation to a weak one, benefiting from both extremes. In addition, we adopt a fast post-processing stage to adapt the model to downstream tasks. Through extensive experiments, we show that DYNACL can improve state-of-the-art self-AT robustness by 8.84% under AutoAttack on the CIFAR-10 dataset, and can even outperform vanilla supervised adversarial training for the first time. Our code is available at https://github.com/PKU-ML/DYNACL.

Rundong Luo, Yifei Wang, Yisen Wang • 2023
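The core idea of the abstract, annealing augmentation strength from strong to weak over training, can be sketched as a simple schedule function. The sketch below assumes a piecewise-constant decay in which the strength steps down every `period` epochs; the function and parameter names are illustrative, and the exact schedule used by DYNACL may differ (see the linked repository for the authors' implementation).

```python
def augmentation_strength(epoch: int, total_epochs: int = 100, period: int = 10) -> float:
    """Hypothetical strong-to-weak annealing schedule.

    Strength stays fixed within each `period`-epoch span and steps
    from 1.0 (strongest augmentation) down toward 0.0 (weakest).
    """
    step = epoch // period
    return max(0.0, 1.0 - step * period / total_epochs)


def jitter_magnitude(epoch: int, base: float = 0.8) -> float:
    # Scale an augmentation hyper-parameter (e.g. color-jitter
    # magnitude in a SimCLR-style pipeline) by the current strength.
    return base * augmentation_strength(epoch)
```

For example, with the defaults the schedule holds strength 1.0 for epochs 0-9, drops to 0.9 for epochs 10-19, and reaches 0.1 in the final ten epochs, so early training sees strong augmentations and late training sees weak ones.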

Related benchmarks

Task                 | Dataset                                    | Result                       | Rank
---------------------|--------------------------------------------|------------------------------|-----
Image Classification | STL-10 (test)                              | --                           | 357
Image Classification | CIFAR-10-C (test)                          | --                           | 61
Image Classification | CIFAR-100-C v1 (test)                      | --                           | 60
Image Classification | CIFAR-100 pre-trained on CIFAR-10 (test)   | AA 47.4                      | 24
Image Classification | STL-10 pre-trained on CIFAR-10 (test)      | --                           | 22
Image Classification | CIFAR-10 (test)                            | AA 50.52                     | 12
Image Classification | STL-10 pre-trained on CIFAR-100 (test)     | Average Accuracy 31.17       | 12
Image Classification | CIFAR-10-C CS-1 1.0 (test)                 | Mean Accuracy 79.77          | 12
Image Classification | CIFAR-10-C CS-5 1.0 (test)                 | Mean Accuracy 65.6           | 12
Image Classification | CIFAR-100 (test)                           | Average Accuracy (AA) 24.7   | 12

Showing 10 of 15 rows.
