
Unrestricted Adversarial Examples via Semantic Manipulation

About

Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable to adversarial examples, which are carefully crafted samples with small-magnitude perturbations. Such adversarial perturbations are usually restricted by bounding their $\mathcal{L}_p$ norm so that they remain imperceptible, and many current defenses exploit this property to reduce their adversarial impact. In this paper, we instead introduce "unrestricted" perturbations that manipulate semantically meaningful, image-based visual descriptors - color and texture - in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing, and adversarially trained models. We also show that the proposed methods can be effectively applied to both image classification and image captioning on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies showing that our generated semantic adversarial examples look photorealistic to humans despite having large-magnitude perturbations compared to other attacks.
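The core idea - optimizing a small set of semantically meaningful parameters (e.g., a color shift) instead of an $\mathcal{L}_p$-bounded per-pixel perturbation - can be illustrated with a toy sketch. This is not the paper's cAdv/tAdv method; the linear classifier, image size, and step size below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: logits = W @ flatten(img) + b over an RGB image.
H, W_, C = 8, 8, 3
num_classes = 5
W = rng.normal(size=(num_classes, H * W_ * C))
b = rng.normal(size=num_classes)

def logits(img):
    return W @ img.reshape(-1) + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

img = rng.uniform(0.2, 0.8, size=(H, W_, C))
true_label = int(np.argmax(logits(img)))

# "Semantic" attack sketch: instead of per-pixel Lp-bounded noise, optimize
# only a global per-channel color shift delta (3 parameters). The gradient of
# the cross-entropy loss w.r.t. delta is the pixel gradient summed per channel.
delta = np.zeros(C)
lr = 0.05
for _ in range(100):
    shifted = np.clip(img + delta, 0.0, 1.0)
    p = softmax(logits(shifted))
    g_logits = p.copy()
    g_logits[true_label] -= 1.0            # d(loss)/d(logits), cross-entropy
    g_pix = (g_logits @ W).reshape(H, W_, C)  # gradient w.r.t. pixels
    g_delta = g_pix.sum(axis=(0, 1))          # collapse to per-channel shift
    delta += lr * g_delta                     # ascend the loss

adv = np.clip(img + delta, 0.0, 1.0)
print("original label:", true_label, "adversarial label:", int(np.argmax(logits(adv))))
```

Because the perturbation lives in a 3-dimensional color space rather than pixel space, it can be large in $\mathcal{L}_p$ norm while remaining a plausible global recoloring, which is the intuition behind why such attacks evade norm-based defenses.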

Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Attack | ImageNet (val) | - | - | 222 |
| Adversarial Attack | ImageNet (test) | Success Rate | 93.3 | 101 |
| Adversarial Attack | ImageNet-compatible Stable Diffusion context v1.4 (test) | ASR (MN-v2) | 99.9 | 38 |
| Targeted Transfer Attack | ImageNet (val) | Attack Success Rate | 100 | 25 |
| Adversarial Attack | ImageNet-Compatible | HGD Score | 12.2 | 11 |
| Image Quality Assessment | ImageNet (test) | NIMA Score (AVA) | 4.97 | 11 |
| Black-box Adversarial Attack | ImageNet | Top-1 Accuracy (JPEG) | 30.6 | 7 |
| Image Quality Assessment | ImageNet | NIMA Technical Score | 4.718 | 7 |
