
Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models

About

While adversarial training has been extensively studied for ResNet architectures and low-resolution datasets like CIFAR, much less is known for ImageNet. Given the recent debate about whether transformers are more robust than convnets, we revisit adversarial training on ImageNet, comparing ViTs and ConvNeXts. Extensive experiments show that minor changes in architecture, most notably replacing the PatchStem with a ConvStem, and in the training scheme have a significant impact on the achieved robustness. These changes not only increase robustness in the seen $\ell_\infty$-threat model, but even more so improve generalization to unseen $\ell_1/\ell_2$-attacks. Our modified ConvNeXt, ConvNeXt + ConvStem, yields the most robust $\ell_\infty$-models across different ranges of model parameters and FLOPs, while our ViT + ConvStem yields the best generalization to unseen threat models.
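The PatchStem-to-ConvStem swap can be illustrated with simple output-size arithmetic. The sketch below is a minimal illustration, assuming the ConvStem is a stack of four overlapping 3x3 stride-2 convolutions (a common design; the paper's exact layer configuration may differ): both stems downsample a 224x224 input to the same 14x14 token grid, so the rest of the network is unchanged.

```python
def conv_out(size: int, kernel: int, stride: int, padding: int = 0) -> int:
    """Spatial output size of a single convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

def patch_stem(size: int, patch: int = 16) -> int:
    # PatchStem: one non-overlapping patch-embedding conv (kernel = stride = patch size).
    return conv_out(size, kernel=patch, stride=patch)

def conv_stem(size: int, depth: int = 4) -> int:
    # Illustrative ConvStem: `depth` overlapping 3x3 stride-2 convolutions.
    # The exact depth and channel widths are an assumption, not the paper's spec.
    for _ in range(depth):
        size = conv_out(size, kernel=3, stride=2, padding=1)
    return size

print(patch_stem(224))  # -> 14
print(conv_stem(224))   # -> 14
```

Because the ConvStem reaches the same 14x14 grid through several overlapping convolutions rather than one hard patch cut, it provides a smoother early feature extractor, which is the kind of minor architectural change the abstract credits for the robustness gains.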

Naman D Singh, Francesco Croce, Matthias Hein • 2023

Related benchmarks

Task                 | Dataset                   | Metric            | Result | Rank
---------------------|---------------------------|-------------------|--------|-----
Image Classification | ImageNet-1k (val)         | --                | --     | 1469
Image Classification | CIFAR-100                 | --                | --     | 116
Image Classification | ImageNet RobustBench (val)| Clean Accuracy    | 76.3   | 36
Adversarial Attack   | ImageNet                  | Parsimon          | 31.16  | 19
Adversarial Attack   | ImageNet                  | Parsimon          | 35.5   | 19
Image Classification | ImageNet-1k 1.0 (test)    | Accuracy (Clean)  | 78.2   | 17
Generative Modeling  | ImageNet 256x256          | FID               | 44.46  | 15
Image Classification | ImageNet 1k (test)        | Clean Accuracy    | 77     | 14
Image Classification | ImageNet                  | Standard Accuracy | 77     | 11
Classification       | ImageNet 256x256          | Accuracy (%)      | 78.25  | 9

Showing 10 of 13 rows

Other info

Code
