FakeScope: Large Multimodal Expert Model for Transparent AI-Generated Image Forensics
About
The rapid and unrestrained advancement of generative artificial intelligence (AI) presents a double-edged sword: while enabling unprecedented creativity, it also facilitates the generation of highly convincing synthetic content, undermining societal trust. As image generation techniques grow increasingly sophisticated, detecting synthetic images is no longer just a binary classification task; it calls for explainable methodologies that enhance trustworthiness and transparency. Existing detection models, however, focus primarily on classification and offer limited explanatory insight. To address these limitations, we propose FakeScope, an expert large multimodal model (LMM) tailored for AI-generated image forensics, which not only identifies synthetic images with high accuracy but also delivers rich, query-contingent forensic insights. At the foundation of our approach is FakeChain, a large-scale dataset of structured forensic reasoning grounded in visual trace evidence, constructed via a human-machine collaborative framework. Building on FakeChain, we develop FakeInstruct, the largest multimodal instruction-tuning dataset to date, comprising two million visual instructions that instill nuanced forensic awareness into LMMs. Empowered by FakeInstruct, FakeScope achieves state-of-the-art performance in both closed-ended and open-ended forensic scenarios: it accurately distinguishes synthetic images, provides coherent explanations, discusses fine-grained forgery artifacts, and suggests actionable enhancement strategies. Notably, despite being trained exclusively on qualitative hard labels, FakeScope demonstrates remarkable zero-shot quantitative detection capability via our proposed token-based probability estimation strategy. Furthermore, it generalizes robustly to unseen image generators and performs reliably in in-the-wild scenarios.
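To make the token-based probability estimation idea concrete, below is a minimal sketch assuming a HuggingFace-style causal LMM: the model is prompted with a closed-ended yes/no question about the image, and instead of decoding a hard answer, the next-token logits for the "Yes" and "No" answer tokens are renormalized with a softmax to obtain a continuous fake-probability score. The function name, prompt, and token choices here are illustrative assumptions, not the exact implementation described above.

```python
import torch


def fake_probability(model, tokenizer, inputs):
    """Turn hard-label ("Yes"/"No") answer logits into a soft fake-score.

    Sketch only: `inputs` is assumed to already contain the encoded image
    and a closed-ended question such as "Is this image AI-generated?
    Answer Yes or No." The probability is read off the next-token logits
    rather than from sampled text, so no generation step is needed.
    """
    with torch.no_grad():
        # Logits over the vocabulary for the token that would be
        # generated next (batch 0, final sequence position).
        logits = model(**inputs).logits[0, -1]

    # Vocabulary ids of the two candidate answer tokens (hypothetical
    # choice; the exact answer tokens depend on the tokenizer).
    yes_id = tokenizer.convert_tokens_to_ids("Yes")
    no_id = tokenizer.convert_tokens_to_ids("No")

    # Renormalize over just the two answer tokens.
    pair = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return pair[0].item()  # P("Yes" | image, question), read as P(fake)
```

Because the score is a calibrated ratio between only the two answer tokens, it varies smoothly between 0 and 1 even though training used only hard labels, which is what enables the zero-shot quantitative detection reported above.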
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Forensic Detection | FakeBench FakeClass (qualitative setting) | Detection Rate (Fake) | 99.5 | 15 |
| AIGC Detection | WildRF (Reddit) | AP | 91.89 | 12 |
| AIGC Detection | WildRF (Facebook) | AP | 85.31 | 12 |
| AIGC Detection | WildRF (X) | AP | 92.96 | 12 |
| AIGC Detection | SynthWildX (Firefly) | AP | 88.04 | 12 |
| AIGC Detection | SynthWildX (DALL·E 3) | AP | 89.02 | 12 |
| AIGC Detection | SynthWildX (Midjourney) | AP | 83.72 | 12 |
| AIGC Detection | WildRF + SynthWildX (combined average) | mAP | 88.49 | 12 |
| Forensic Attribution | FakeBench MCQ | Yes/No Accuracy | 79.47 | 12 |