
Community Forensics: Using Thousands of Generators to Train Fake Image Detectors

About

One of the key challenges of detecting AI-generated images is spotting images created by previously unseen generative models. We argue that the limited diversity of the training data is a major obstacle to addressing this problem, and we propose a new dataset that is significantly larger and more diverse than prior work. As part of creating this dataset, we systematically download thousands of text-to-image latent diffusion models and sample images from them. We also collect images from dozens of popular open-source and commercial models. The resulting dataset contains 2.7M images sampled from 4803 different models. These images collectively capture a wide range of scene content, generator architectures, and image processing settings. Using this dataset, we study the generalization abilities of fake image detectors. Our experiments suggest that detection performance improves as the number of models in the training set increases, even when these models have similar architectures. We also find that detection performance improves as the diversity of the models increases, and that our trained detectors generalize better than those trained on other datasets. The dataset can be found at https://jespark.net/projects/2024/community_forensics
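The collection procedure described above (download many generators, then sample images from each) can be sketched as a simple nested loop over models, prompts, and seeds. Everything below is hypothetical illustration, not the authors' pipeline: the model IDs, prompts, and the `sample_image` stub stand in for loading a real diffusion model and running its sampler.

```python
# Illustrative sketch of sampling images from many generators.
# `sample_image` is a hypothetical stub: a real implementation would load
# `model_id` (e.g., via a diffusion library), seed the RNG, and decode an
# image; here we only record the sampling metadata.

def sample_image(model_id: str, prompt: str, seed: int) -> dict:
    return {"model": model_id, "prompt": prompt, "seed": seed}

MODEL_IDS = ["model-A", "model-B", "model-C"]   # the paper uses 4803 models
PROMPTS = ["a photo of a cat", "a city street at night"]
SEEDS = range(2)

# One record per (model, prompt, seed) combination.
dataset = [
    sample_image(m, p, s)
    for m in MODEL_IDS
    for p in PROMPTS
    for s in SEEDS
]
print(len(dataset))  # 3 models x 2 prompts x 2 seeds = 12 sampled images
```

Varying models, prompts, and seeds independently is one simple way to get the breadth of scene content and generator settings the dataset aims for.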

Jeongsoo Park, Andrew Owens • 2024

Related benchmarks

Task                          Dataset                                Result                       Rank
Generated Image Detection     GenImage (test)                        Average Accuracy: 84         103
AI-generated image detection  Chameleon (test)                       Accuracy: 77.5               54
Synthetic Image Detection     ForenSynths (test)                     Mean Accuracy: 92.3          31
AI-generated image detection  CommFor (Community-Forensics) (test)   Accuracy: 86.8               12
AI-generated image detection  SynthBuster (test)                     Accuracy: 87                 12
AI-generated image detection  UFD (UniversalFakeDetect) (test)       Accuracy: 94                 12
AI-generated image detection  So-Fake OOD                            Flux.1 pro Accuracy: 59.37   8
