Analysis of adversarial attacks against CNN-based image forgery detectors

About

With the ubiquitous diffusion of social networks, images are becoming a dominant and powerful communication channel. Not surprisingly, they are also increasingly subject to manipulations aimed at distorting information and spreading fake news. In recent years, the scientific community has devoted major efforts to countering this menace, and many image forgery detectors have been proposed. Currently, owing to the success of deep learning in many multimedia processing tasks, there is great interest in CNN-based detectors, and early results are already very promising. Recent studies in computer vision, however, have shown CNNs to be highly vulnerable to adversarial attacks: small perturbations of the input data that drive the network towards an erroneous classification. In this paper we analyze the vulnerability of CNN-based image forensics methods to adversarial attacks, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, both easy and hard to detect.

Diego Gragnaniello, Francesco Marra, Giovanni Poggi, Luisa Verdoliva • 2018
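
The kind of perturbation discussed in the abstract can be illustrated with a one-step gradient-sign (FGSM-style) attack. The sketch below is only illustrative and is not taken from the paper: `detector`, `image`, and `label` are assumed placeholders for any CNN-based binary forgery classifier, an input tensor in [0, 1], and its true class.

```python
# Minimal FGSM-style attack sketch against a hypothetical CNN forgery detector.
# `detector`, `image`, and `label` are assumptions: any binary classifier
# mapping an image tensor to logits over {pristine, forged} would fit here.
import torch
import torch.nn.functional as F

def fgsm_attack(detector: torch.nn.Module,
                image: torch.Tensor,
                label: torch.Tensor,
                eps: float = 2.0 / 255.0) -> torch.Tensor:
    """Return a copy of `image` perturbed by eps * sign(gradient of the loss)."""
    detector.eval()
    adv = image.clone().detach().requires_grad_(True)

    logits = detector(adv)                 # forward pass on the candidate image
    loss = F.cross_entropy(logits, label)  # loss w.r.t. the true label (e.g. "forged")
    loss.backward()                        # gradient of the loss w.r.t. the pixels

    # Single step in the direction that increases the loss,
    # clipped back to the valid pixel range.
    adv = adv + eps * adv.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

FGSM is used here only as one simple example of an attack; the paper itself evaluates several detectors and several attack types across a range of manipulations.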

Related benchmarks

Task                       | Dataset              | Result                            | Rank
Image Classification      | FashionMNIST (test)  | --                                | 260
Malicious Client Detection | Fashion MNIST        | Avg Malicious Clients Detected: 6 | 24
