
Adversarial Patch

About

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class. To reproduce the results from the paper, our code is available at https://github.com/tensorflow/cleverhans/tree/master/examples/adversarial_patch
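The key operation behind such patches is compositing a trained patch onto an arbitrary scene before it is photographed or fed to a classifier. As a minimal sketch (not the authors' implementation; the function name, array shapes, and pixel convention here are assumptions), the overlay step can be written as:

```python
import numpy as np

def apply_patch(image, patch, x, y):
    """Overlay an adversarial patch onto an image at position (x, y).

    The paper optimizes the patch to maximize the target-class score in
    expectation over random placements, rotations, and scales; this
    sketch shows only the placement step, assuming float arrays of
    shape (H, W, 3) with values in [0, 1].
    """
    out = image.copy()
    ph, pw = patch.shape[:2]
    # The patch fully replaces the underlying pixels (it is not blended),
    # which is what makes it printable and robust in the physical world.
    out[y:y + ph, x:x + pw] = patch
    return out

# Toy example: place a 3x3 white patch on an 8x8 black image.
img = np.zeros((8, 8, 3))
patch = np.ones((3, 3, 3))
patched = apply_patch(img, patch, x=2, y=4)
```

During training, the patch pixels (not the image) are the variables being optimized, and the expectation over transformations is what makes the result work from many viewpoints.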

Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer • 2017

Related benchmarks

Task                           | Dataset                  | Result     | Rank
Untargeted Attack              | Tongji (test)            | ASR: 100   | 56
Untargeted Adversarial Attack  | AISEC (test)             | ASR: 97.9  | 56
Targeted Adversarial Attack    | AISEC                    | ASR: 79.87 | 56
Targeted Attack                | Tongji (test)            | ASR: 99.51 | 56
Untargeted Attack              | IITD                     | ASR: 78.61 | 56
Targeted Attack                | IITD (test)              | ASR: 20.2  | 56
Object Detection               | SAR-Ship-Dataset (test)  | mAP50: 64  | 5
