
Generative Adversarial Perturbations

About

In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for both targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.
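As a rough sketch of the setup the abstract describes for the image-dependent, non-targeted case, the PyTorch snippet below trains a small convolutional generator to output a perturbation, rescales it to a fixed L-infinity budget, and updates the generator to maximize a frozen pre-trained classifier's loss. The tiny generator, the eps budget, and the exact loss variant are illustrative assumptions for this sketch, not the paper's actual architecture or hyperparameters.

import torch
import torch.nn as nn
import torchvision.models as models

# Illustrative stand-in generator; the paper uses much deeper
# generator networks, so treat this architecture as hypothetical.
class PerturbationGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x, eps=10 / 255):  # eps is an assumed L-inf budget
        delta = self.net(x)
        # Rescale so each perturbation's L-infinity norm equals eps,
        # keeping the adversarial image close to the natural one.
        peak = delta.abs().amax(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        return (x + delta * (eps / peak)).clamp(0, 1)

# Freeze a pre-trained classifier; only the generator is trained.
target = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in target.parameters():
    p.requires_grad_(False)

gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(4, 3, 224, 224)  # stand-in batch of natural images
with torch.no_grad():
    labels = target(images).argmax(dim=1)  # current model predictions

# Non-targeted objective: push predictions away from the current labels
# by ascending the classifier's cross-entropy (hence the negation).
adv = gen(images)
loss = -loss_fn(target(adv), labels)
opt.zero_grad()
loss.backward()
opt.step()

For a targeted attack, the negation would be dropped and the labels replaced by the desired target class; for image-agnostic (universal) perturbations, the generator would transform a fixed input pattern rather than each image.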

Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie • 2017

Related benchmarks

Task                         Dataset             Result                Rank
Multi-Label Classification   Pascal VOC (test)   Hamming Score: 76.24  112
Object Detection             MS-COCO (test)      AP: 42.4              81
Multi-Label Classification   MS-COCO (test)      --                    24
