Explaining and Harnessing Adversarial Examples

About

Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.

Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy • 2014
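
The "simple and fast method" mentioned in the abstract is the fast gradient sign method (FGSM): perturb an input x by epsilon * sign(grad_x J(theta, x, y)), i.e. move each input component a small step epsilon in the direction that increases the training cost J. The sketch below is a minimal illustration under stated assumptions, not code from the paper: it assumes a PyTorch image classifier `model` with inputs in [0, 1], uses cross-entropy as J, and takes epsilon = 0.25 (the value the paper reports for its MNIST experiments); the helper names `fgsm_perturb` and `adversarial_training_loss` are hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.25):
    """Return x + epsilon * sign(grad_x J(theta, x, y)) for a batch (x, y)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)        # J(theta, x, y)
    grad = torch.autograd.grad(loss, x)[0]     # gradient w.r.t. the input only
    x_adv = x + epsilon * grad.sign()          # worst-case step under an L-infinity budget
    return x_adv.detach().clamp(0.0, 1.0)      # stay in the valid pixel range

def adversarial_training_loss(model, x, y, epsilon=0.25, alpha=0.5):
    """Adversarial training objective: blend the clean loss with the loss on
    FGSM-perturbed copies of the same batch (the paper uses alpha = 0.5)."""
    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(fgsm_perturb(model, x, y, epsilon)), y)
    return alpha * clean_loss + (1 - alpha) * adv_loss
```

In a training loop, `adversarial_training_loss` would simply stand in for the usual cross-entropy loss; the MNIST result in the abstract comes from training a maxout network against such an objective.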

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR-100 (test) | Accuracy: 73.2 | 3518
Image Classification | CIFAR-10 (test) | Accuracy: 93.1 | 3381
Image Classification | TinyImageNet (test) | Accuracy: 62.4 | 366
Image Classification | SVHN (test) | Accuracy: 95.4 | 362
Adversarial Attack | ImageNet (val) | -- | 222
Cross-modality Person Re-identification | SYSU-MM01 (Indoor Search) | Rank-1: 53.87 | 114
Image Classification | CIFAR-10 (test) | Natural Accuracy: 84.26 | 48
Adversarial Transfer Attack | Traffic | Degradation % (MSE): -69 | 45
Adversarial Transfer Attack | ECL | MSE Degradation (%): 1.62 | 45
Adversarial Transfer Attack | ETT | MSE Degradation (%): 0.0271 | 45
Showing 10 of 61 rows
