
AdvDrop: Adversarial Attack to DNNs by Dropping Information

About

Humans can easily recognize visual objects with lost information: even when most details are lost and only the contour is preserved, e.g., in a cartoon. However, for the visual perception of Deep Neural Networks (DNNs), recognizing abstract objects (visual objects with lost information) remains a challenge. In this work, we investigate this issue from an adversarial viewpoint: does the performance of DNNs decrease even for images that lose only a little information? To this end, we propose a novel adversarial attack, named AdvDrop, which crafts adversarial examples by dropping existing information from images. Whereas most previous adversarial attacks explicitly add extra disturbing information to clean images, our work explores the adversarial robustness of DNN models from a novel perspective: dropping imperceptible details to craft adversarial examples. We demonstrate the effectiveness of AdvDrop through extensive experiments, and show that this new type of adversarial example is more difficult to defend against with current defense systems.
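The core mechanism, removing imperceptible detail instead of adding a perturbation, can be illustrated with JPEG-style quantization of DCT coefficients. This is only a minimal sketch: it uses a fixed, hypothetical quantization step `q` rather than the attack's optimized quantization, and is not the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn


def drop_info(block: np.ndarray, q: float) -> np.ndarray:
    """Drop fine detail from an image block by quantizing its DCT coefficients.

    Coefficients are divided by the step q and rounded, then rescaled;
    small high-frequency components are rounded away entirely, so the
    reconstruction carries strictly less information than the input.
    """
    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / q) * q
    return idctn(quantized, norm="ortho")


rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))   # a toy 8x8 grayscale block
recon = drop_info(block, q=16.0)

# The reconstruction stays close to the original (each DCT coefficient
# moves by at most q/2), yet fine detail has been discarded.
err = np.abs(recon - block).mean()
```

In the actual attack setting, one would optimize the quantization step per frequency band so that the dropped details are imperceptible to humans but flip the model's prediction; the fixed scalar `q` here only demonstrates the drop-and-reconstruct loop.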

Ranjie Duan, Yuefeng Chen, Dantong Niu, Yun Yang, A. K. Qin, Yuan He • 2021

Related benchmarks

| Task | Dataset / Target | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Attack | ImageNet (test) | -- | -- | 101 |
| Untargeted Adversarial Attack | CIFAR-10 (test) | ASR | 99.92 | 57 |
| Untargeted Adversarial Attack | ImageNet-1k (val) | ASR | 99.76 | 57 |
| Untargeted white-box adversarial attack | ImageNet | ASR | 97.2 | 40 |
| Targeted Adversarial Attack | CIFAR-10 (test) | -- | -- | 12 |
| Untargeted white-box attack | Target Model: Vgg-19 | Latency (s) | 268 | 10 |
| Untargeted white-box attack | Target Model: MobileNet-V2 | Attack Time (s) | 116 | 10 |
| Untargeted white-box attack | Target Model: WideResNet-50 | Time (s) | 353 | 10 |
| Untargeted Adversarial Attack | CIFAR-100 (test) | ASR | 99.93 | 9 |
| Targeted Adversarial Attack | ImageNet-1k (val) | -- | -- | 9 |

(Showing 10 of 14 rows.)
