
Scratch that! An Evolution-based Adversarial Attack against Neural Networks

About

We study black-box adversarial attacks on image classifiers under a constrained threat model, in which adversaries can only modify a small fraction of pixels, in the form of scratches on an image. We show that adversaries can generate localized adversarial scratches that cover less than 5% of the pixels in an image and achieve targeted success rates of 98.77% and 97.20% on ImageNet- and CIFAR-10-trained ResNet-50 models, respectively. We demonstrate that our scratches are effective under diverse shapes, such as straight lines or parabolic Bézier curves, with single or multiple colors. In the extreme condition in which our scratches are a single color, we obtain a targeted attack success rate of 66% on CIFAR-10 with an order of magnitude fewer queries than comparable attacks. We successfully launch our attack against Microsoft's Cognitive Services Image Captioning API and propose various mitigation strategies.
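To make the idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of the two ingredients the abstract describes: rasterizing a single-color quadratic Bézier "scratch" onto an image, and a toy (1+1)-style evolutionary loop that mutates the scratch's control points and keeps a candidate when a fitness score improves. The fitness function below is a stand-in placeholder; in the real attack it would be the target model's probability of the attacker's chosen class.

```python
import numpy as np

rng = np.random.default_rng(0)

def bezier_scratch(image, p0, p1, p2, color, n_samples=200):
    """Rasterize a quadratic Bezier curve (control points p0, p1, p2 as
    (row, col)) onto a copy of `image` in a single color."""
    out = image.copy()
    h, w = image.shape[:2]
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Quadratic Bezier: (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
    pts = ((1 - t) ** 2 * np.asarray(p0)
           + 2 * (1 - t) * t * np.asarray(p1)
           + t ** 2 * np.asarray(p2))
    rows = np.clip(np.rint(pts[:, 0]).astype(int), 0, h - 1)
    cols = np.clip(np.rint(pts[:, 1]).astype(int), 0, w - 1)
    out[rows, cols] = color
    return out

def target_score(image):
    # Placeholder fitness (mean red-channel intensity), standing in for
    # the model's probability of the target class in a real attack.
    return image[..., 0].mean()

img = np.zeros((32, 32, 3), dtype=np.uint8)  # CIFAR-10-sized canvas
best = [(2, 2), (16, 31), (30, 2)]
best_fit = target_score(bezier_scratch(img, *best, color=(255, 0, 0)))

# Toy evolutionary search: perturb control points, keep improvements.
for _ in range(50):
    cand = [tuple(np.clip(np.asarray(p) + rng.integers(-3, 4, size=2), 0, 31))
            for p in best]
    fit = target_score(bezier_scratch(img, *cand, color=(255, 0, 0)))
    if fit >= best_fit:
        best, best_fit = cand, fit

adv = bezier_scratch(img, *best, color=(255, 0, 0))
changed = np.any(adv != img, axis=-1).mean()  # fraction of perturbed pixels
```

Even after the search, the scratch remains a thin curve, so the perturbed-pixel fraction stays small, consistent with the localized threat model described above.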

Malhar Jere, Loris Rossi, Briland Hitaj, Gabriela Ciocarlie, Giacomo Boracchi, Farinaz Koushanfar • 2019

Related benchmarks

Task: Adversarial Attack
Dataset: ImageNet (val)
Metric: ASR (General)
Result: 6.53e+3
Rank: 222
