CNN-generated images are surprisingly easy to spot... for now

About

In this work we ask whether it is possible to create a "universal" detector for telling apart real images from those generated by a CNN, regardless of the architecture or dataset used. To test this, we collect a dataset consisting of fake images generated by 11 different CNN-based image generator models, chosen to span the space of commonly used architectures today (ProGAN, StyleGAN, BigGAN, CycleGAN, StarGAN, GauGAN, DeepFakes, cascaded refinement networks, implicit maximum likelihood estimation, second-order attention super-resolution, seeing-in-the-dark). We demonstrate that, with careful pre- and post-processing and data augmentation, a standard image classifier trained on only one specific CNN generator (ProGAN) is able to generalize surprisingly well to unseen architectures, datasets, and training methods (including the just-released StyleGAN2). Our findings suggest the intriguing possibility that today's CNN-generated images share some common systematic flaws, preventing them from achieving realistic image synthesis. Code and pre-trained networks are available at https://peterwang512.github.io/CNNDetection/.
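The paper's detector is a standard binary classifier whose generalization comes largely from data augmentation: during training, images are randomly Gaussian-blurred and JPEG-compressed. Below is a minimal sketch of such an augmentation pipeline using Pillow; the probabilities and parameter ranges here are illustrative assumptions, not necessarily the paper's exact values.

```python
import io
import random

from PIL import Image, ImageFilter


def augment(img, rng=random, blur_prob=0.5, jpeg_prob=0.5):
    """Blur + JPEG augmentation in the spirit of the paper's training setup.

    Each corruption is applied independently with some probability;
    the sigma and quality ranges below are assumptions for illustration.
    """
    if rng.random() < blur_prob:
        sigma = rng.uniform(0.0, 3.0)  # assumed blur-strength range
        img = img.filter(ImageFilter.GaussianBlur(radius=sigma))
    if rng.random() < jpeg_prob:
        quality = rng.randint(30, 100)  # assumed JPEG-quality range
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img


if __name__ == "__main__":
    rng = random.Random(0)
    sample = Image.new("RGB", (64, 64), (128, 64, 32))
    out = augment(sample, rng=rng)
    print(out.size, out.mode)
```

The intuition is that blur and compression destroy the generator-specific high-frequency fingerprints, forcing the classifier to rely on artifacts that survive common post-processing and therefore transfer across unseen generators.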

Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, Alexei A. Efros · 2019

Related benchmarks

Task                          Dataset                  Result                          Rank
Deepfake Detection            DFDC                     AUC 72.1                        150
Generated Image Detection     GenImage (test)          Average Accuracy 74.8           124
AI-generated image detection  Chameleon                Accuracy 65.7                   107
AI-generated image detection  GenImage                 Midjourney Detection Rate 52.8  106
Deepfake Detection            DFD                      AUC 0.6012                      91
AI-generated image detection  Chameleon (test)         Accuracy 60.89                  74
Deepfake Detection            CelebDF v2               AUC 0.756                       57
Deepfake Detection            CelebDF (CDF) v2 (test)  AUC 75.6                        52
Face Forgery Detection        DFDC                     --                              52
AI Image Detection            Midjourney               Accuracy 52.59                  51

Showing 10 of 320 rows.
