
Combating Adversaries with Anti-Adversaries

About

Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input perturbation in the opposite direction of the adversarial one and feeds the classifier a perturbed version of the input. Our approach is training-free and theoretically supported. We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models, and we conduct large-scale experiments ranging from black-box to adaptive attacks on CIFAR10, CIFAR100, and ImageNet. Our layer significantly enhances model robustness while incurring no cost in clean accuracy.
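The core idea of the anti-adversary layer can be sketched in a few lines: take the classifier's own prediction as a pseudo-label, then perturb the input to *increase* confidence in that prediction, i.e. step opposite to the direction a gradient-based attack would take. The snippet below is a minimal illustration for a binary linear classifier; the weights, inputs, and step sizes are made up for illustration, and the paper applies the same principle to deep multiclass networks.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Confidence that x belongs to class 1 under a linear model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def anti_adversary(w, b, x, steps=2, eps=0.1):
    """Anti-adversary layer sketch (binary case): perturb x to raise the
    classifier's confidence in its own prediction -- the opposite of a
    signed-gradient adversarial step. Hyperparameters are illustrative."""
    y_hat = 1 if predict(w, b, x) >= 0.5 else 0   # pseudo-label from the model
    sign = 1.0 if y_hat == 1 else -1.0            # raise or lower the logit
    step = eps / steps
    xa = list(x)
    for _ in range(steps):
        # d(logit)/d(x_i) = w_i, so move each coordinate along sign(w_i)
        # toward higher confidence in the pseudo-label.
        xa = [xi + step * sign * (1.0 if wi > 0 else -1.0 if wi < 0 else 0.0)
              for xi, wi in zip(xa, w)]
    return xa

# Toy check: confidence in the model's own prediction goes up.
w, b = [0.8, -0.5, 0.3], 0.1
x = [0.2, 0.4, -0.1]
p0 = predict(w, b, x)
p1 = predict(w, b, anti_adversary(w, b, x))
```

Because the perturbation is computed per-input at inference time, this layer is training-free: it wraps any pretrained classifier without touching its weights.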

Motasem Alfarra, Juan C. Pérez, Ali Thabet, Adel Bibi, Philip H. S. Torr, Bernard Ghanem · 2021

Related benchmarks

| Task                 | Dataset          | Metric         | Score | Rank |
|----------------------|------------------|----------------|-------|------|
| Image Classification | Flowers102       | Clean Accuracy | 82.4  | 49   |
| Image Classification | StanfordCars     | Clean Accuracy | 76.8  | 40   |
| Classification       | PCAM             | Clean Accuracy | 50.2  | 39   |
| Image Classification | CIFAR10          | Clean Accuracy | 89.3  | 37   |
| Classification       | FGVCAircraft     | Robust Accuracy| 10.7  | 30   |
| Image Classification | OxfordPets       | Robust Accuracy| 61.1  | 27   |
| Image Classification | CIFAR100         | Clean Accuracy | 64.7  | 27   |
| Image Classification | Food101          | Clean Accuracy | 87.7  | 25   |
| Image Classification | Caltech-256      | Clean Accuracy | 88.0  | 20   |
| Image Classification | General-ImageNet | Clean Accuracy | 82.5  | 20   |

Showing 10 of 58 rows.
