
Countering Adversarial Images using Input Transformations

About

This paper investigates strategies for defending image-classification systems against adversarial-example attacks by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, particularly when the network is trained on transformed images. The strength of these defenses lies in their non-differentiable nature and their inherent randomness, which make it difficult for an adversary to circumvent them. Our best defense eliminates 60% of strong gray-box attacks and 90% of strong black-box attacks by a variety of major attack methods.
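Of the transformations the abstract names, bit-depth reduction is the simplest to illustrate. The sketch below is a minimal, hypothetical implementation assuming images are NumPy float arrays in [0, 1]; the function name and the 3-bit default are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def bit_depth_reduce(image, bits=3):
    """Quantize pixel intensities to 2**bits levels.

    Reducing bit depth discards the small, high-precision
    perturbations that adversarial attacks typically rely on,
    while preserving the coarse structure of the image.
    """
    levels = 2 ** bits
    # Snap each pixel to the nearest of `levels` evenly spaced values.
    return np.round(image * (levels - 1)) / (levels - 1)

# Example: a toy 4x4 "image" with 16 distinct intensities
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
reduced = bit_depth_reduce(img, bits=3)  # at most 8 distinct values remain
```

In a defense pipeline, the transformed image (rather than the raw input) would then be passed to the classifier.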

Chuan Guo, Mayank Rana, Moustapha Cisse, Laurens van der Maaten • 2017

Related benchmarks

Task | Dataset | Result | Rank
Adversarial Robustness | CIFAR-10 (L-infinity, epsilon = 4/255) | Robust Accuracy (AA): 22.3 | 10
Multi-task Driving Scene Understanding Robustness | The Dolphins (Lvl. 0) | Final Score: 43.2 | 8
Multi-task Driving Scene Understanding Robustness | The Dolphins (Lvl. 1) | Final Score: 42.85 | 8
Multi-task Driving Scene Understanding Robustness | The Dolphins (Lvl. 2) | Final Score: 41.5 | 8
Multi-task Driving Scene Understanding Robustness | The Dolphins (Lvl. 3) | Final Score: 41.1 | 8
Multi-task Driving Scene Understanding Robustness | The Dolphins (Lvl. 4) | Final Score: 39.25 | 8
