
Efficient Adversarial Training with Transferable Adversarial Examples

About

Adversarial training is an effective defense method to protect classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude additional training time due to the high cost of generating strong adversarial examples during training. In this paper, we first show that there is high transferability between models from neighboring epochs in the same training process, i.e., adversarial examples from one epoch continue to be adversarial in subsequent epochs. Leveraging this property, we propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA), that can enhance the robustness of trained models and greatly improve training efficiency by accumulating adversarial perturbations through epochs. Compared to state-of-the-art adversarial training methods, ATTA enhances adversarial accuracy by up to 7.2% on CIFAR10 and requires 12~14x less training time on the MNIST and CIFAR10 datasets with comparable model robustness.
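The accumulation idea described above can be sketched in code: instead of regenerating each adversarial perturbation from scratch every epoch, the perturbation found in the previous epoch is kept and used to warm-start the attack in the next one. The following is a minimal NumPy illustration of that loop on a toy logistic-regression model, not the authors' implementation; the model, loss, step sizes, and attack budget are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification problem: logistic regression on 2-D points.
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)

def grad_loss_wrt_x(x, yi, w):
    # Gradient of the logistic loss w.r.t. the input (used by the attack).
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - yi) * w

def grad_loss_wrt_w(x, yi, w):
    # Gradient of the logistic loss w.r.t. the weights (used for training).
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - yi) * x

eps, alpha, attack_steps, lr = 0.3, 0.1, 2, 0.5

# Perturbation buffer persisted across epochs: this is the "accumulation
# through epochs" -- the attack is warm-started from last epoch's result
# rather than restarted from zero, so a few steps per epoch suffice.
delta = np.zeros_like(X)

for epoch in range(10):
    for i in range(len(X)):
        # A few PGD-style sign-gradient steps, warm-started from delta[i].
        for _ in range(attack_steps):
            delta[i] += alpha * np.sign(grad_loss_wrt_x(X[i] + delta[i], y[i], w))
            delta[i] = np.clip(delta[i], -eps, eps)  # project to L-inf ball
        # Train the model on the accumulated adversarial example.
        w -= lr * grad_loss_wrt_w(X[i] + delta[i], y[i], w)

# Evaluate on the perturbed inputs after training.
preds = ((X + delta) @ w > 0).astype(float)
print("robust accuracy:", (preds == y).mean())
```

Because the perturbations transfer between neighboring epochs, warm-starting lets each epoch run far fewer attack iterations than a cold-started PGD attack would need, which is the source of the training-time savings the abstract reports.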

Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash • 2019

Related benchmarks

Task                  Dataset          Metric                      Score   Rank
Image Classification  MAD-M            CCA                         35.67   5
Image Classification  MAD-C            CCA                         10.2    5
Adversarial Defense   MAD-M (Learned)  Defense Success Rate (DSR)  36.8    4
Adversarial Defense   MAD-M (New)      DSR                         27.1    4
Adversarial Defense   MAD-C (Learned)  DSR                         11.7    4
Adversarial Defense   MAD-C (New)      DSR                         10.4    4
