
Detecting GAN-generated Imagery using Color Cues

About

Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g. "deepfakes". Leveraging large training sets and extensive computing resources, recent work has shown that GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation, and show that the network's treatment of color differs markedly from that of a real camera in two ways. We further show that these two cues can be used to distinguish GAN-generated imagery from camera imagery, demonstrating effective discrimination between GAN imagery and the real camera images used to train the GAN.
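One of the color cues discussed in this line of work is pixel saturation: camera images routinely contain clipped (over- or under-exposed) pixels, while GAN generators tend not to emit exact intensity extremes. The sketch below is a simplified, hypothetical illustration of that idea, not the paper's actual method; the function names and the `threshold` value are assumptions made for demonstration.

```python
import numpy as np

def saturation_features(image):
    """Fraction of pixels clipped at the intensity extremes in any channel.

    A simplified stand-in for a saturation-based color cue: camera
    images commonly contain over-/under-exposed pixels, while GAN
    outputs rarely hit the exact extremes of the intensity range.
    """
    img = np.asarray(image)
    over = np.mean(np.any(img >= 255, axis=-1))   # over-exposed pixel fraction
    under = np.mean(np.any(img <= 0, axis=-1))    # under-exposed pixel fraction
    return over, under

def looks_camera_like(image, threshold=0.01):
    """Flag an image as camera-like when a non-trivial fraction of its
    pixels is clipped; the threshold is an illustrative choice, not a
    value from the paper."""
    over, under = saturation_features(image)
    return (over + under) >= threshold
```

In practice a detector would combine such features with a trained classifier rather than a fixed threshold; this sketch only shows the shape of the cue.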

Scott McCloskey, Michael Albright · 2018

Related benchmarks

Task               Dataset                       Metric    Result   Rank
Model Attribution  GM-CelebA (test)              Accuracy  62.6     12
Model Attribution  GM-CHQ (test)                 Accuracy  57.4     12
Model Attribution  GM-FFHQ to GM-CelebA-HQ       Accuracy  31.2     12
Model Attribution  GM-CIFAR10 (test)             Accuracy  40.223   12
Model Attribution  GM-FFHQ (test)                Accuracy  50.8     12
Model Attribution  GM-CIFAR10 to GM-CelebA       Accuracy  52.3     12
Model Attribution  GM-CelebA to CIFAR10          Accuracy  43.2     12
Model Attribution  GM-CelebA-HQ to GM-FFHQ       Accuracy  34.2     12
