
What makes fake images detectable? Understanding properties that generalize

About

The quality of image generation and manipulation is reaching impressive levels, making it increasingly difficult for a human to distinguish between what is real and what is fake. However, deep networks can still pick up on the subtle artifacts in these doctored images. We seek to understand what properties of fake images make them detectable and identify what generalizes across different model architectures, datasets, and variations in training. We use a patch-based classifier with limited receptive fields to visualize which regions of fake images are more easily detectable. We further show a technique to exaggerate these detectable properties and demonstrate that, even when the image generator is adversarially finetuned against a fake image classifier, it is still imperfect and leaves detectable artifacts in certain image patches. Code is available at https://chail.github.io/patch-forensics/.
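The key design choice above is a classifier whose receptive field covers only a small patch, so each output prediction can depend only on local texture rather than global image semantics. The paper achieves this by truncating standard deep backbones after their early layers. As an illustrative sketch (not the authors' code), the hypothetical helper below computes how large a pixel window a stack of convolutional layers can see, using the standard receptive-field recurrence:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    `layers` is a list of (kernel_size, stride) pairs, in order.
    Each layer grows the field by (kernel_size - 1) times the
    cumulative stride of all layers before it.
    """
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# A shallow stack of three 3x3 convs with stride 2 sees only a
# 15x15 pixel window per output prediction:
print(receptive_field([(3, 2), (3, 2), (3, 2)]))  # 15
```

Adding depth grows this window quickly, which is why truncating a deep network after its first few layers is what keeps each prediction patch-local and lets the per-patch outputs be visualized as a heatmap of detectable regions.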

Lucy Chai, David Bau, Ser-Nam Lim, Phillip Isola • 2020

Related benchmarks

Task                      | Dataset                        | Metric            | Result | Rank
Deepfake Detection        | DFDC                           | AUC               | 65.6   | 135
Generated Image Detection | GenImage (test)                | Average Accuracy  | 68.7   | 103
Deepfake Detection        | DFDC (test)                    | AUC               | 65.6   | 87
Deepfake Detection        | DFD                            | AUC               | 0.4991 | 77
Fake Face Detection       | Celeb-DF v2 (test)             | AUC               | 99.96  | 50
Deepfake Detection        | UniversalFakeDetect 1.0 (test) | Accuracy (ProGAN) | 98.86  | 42
Deepfake Detection        | CelebDF v2                     | AUC               | 0.696  | 40
Face Forgery Detection    | FaceForensics++ (test)         | AUC (DF)          | 94     | 34
Deepfake Detection        | FF++                           | AUC               | 73.75  | 34
Synthetic Image Detection | ForenSynths (test)             | Mean Accuracy     | 80.7   | 31

Showing 10 of 73 rows.
