
FakeScope: Large Multimodal Expert Model for Transparent AI-Generated Image Forensics

About

The rapid and unrestrained advancement of generative artificial intelligence (AI) presents a double-edged sword. While enabling unprecedented creativity, it also facilitates the generation of highly convincing content, undermining societal trust. As image generation techniques become increasingly sophisticated, detecting synthetic images is no longer just a binary task; it necessitates explainable methodologies to enhance trustworthiness and transparency. However, existing detection models primarily focus on classification, offering limited explanatory insights. To address these limitations, we propose FakeScope, an expert large multimodal model (LMM) tailored for AI-generated image forensics, which not only identifies synthetic images with high accuracy but also delivers rich query-contingent forensic insights. At the foundation of our approach is FakeChain, a large-scale dataset containing structured forensic reasoning based on visual trace evidence, constructed via a human-machine collaborative framework. Building on FakeChain, we develop FakeInstruct, the largest multimodal instruction tuning dataset to date, comprising two million visual instructions that instill nuanced forensic awareness into LMMs. Empowered by FakeInstruct, FakeScope achieves state-of-the-art performance in both closed-ended and open-ended forensic scenarios. It can accurately distinguish synthetic images, provide coherent explanations, discuss fine-grained forgery artifacts, and suggest actionable enhancement strategies. Notably, despite being trained exclusively on qualitative hard labels, FakeScope demonstrates remarkable zero-shot quantitative detection capability via our proposed token-based probability estimation strategy. Furthermore, it shows robust generalization across unseen image generators and performs reliably under in-the-wild scenarios.
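The abstract's token-based probability estimation strategy is not detailed on this page; a common way such a scheme works is to read the model's logits for its answer tokens (e.g. "yes" = fake, "no" = real) and normalize them with a two-way softmax, yielding a soft score from a model trained only on hard labels. The sketch below illustrates that general idea; the function name and logit values are illustrative assumptions, not taken from the paper.

```python
import math

def fake_probability(yes_logit: float, no_logit: float) -> float:
    """Map the model's logits for the 'yes' (fake) and 'no' (real)
    answer tokens to a soft fake-probability via a two-way softmax.
    (Illustrative sketch, not the paper's exact formulation.)"""
    e_yes = math.exp(yes_logit)
    e_no = math.exp(no_logit)
    return e_yes / (e_yes + e_no)

# Equal logits give an undecided score of 0.5; a strong preference
# for the 'yes' token pushes the score toward 1.
print(fake_probability(0.0, 0.0))
print(fake_probability(4.2, -1.3))
```

In practice the two logits would be read from the LMM's output distribution at the answer position, so the score requires no extra training beyond the hard-label instruction tuning.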

Yixuan Li, Yu Tian, Yipo Huang, Wei Lu, Shiqi Wang, Weisi Lin, Anderson Rocha • 2025

Related benchmarks

Task                  Dataset                                    Metric                 Result  Rank
Forensic Detection    FakeClass FakeBench (qualitative setting)  Detection Rate (Fake)  99.5    15
AIGC Detection        WildRF (Reddit)                            AP                     91.89   12
AIGC Detection        WildRF (FB)                                AP                     85.31   12
AIGC Detection        WildRF (X)                                 AP                     92.96   12
AIGC Detection        SynthWildX (Firefly)                       AP                     88.04   12
AIGC Detection        WildRF + SynthWildX (combined average)     mAP                    88.49   12
AIGC Detection        SynthWildX (Dalle3)                        AP                     89.02   12
AIGC Detection        SynthWildX (MJ)                            AP                     83.72   12
Forensic Attribution  FakeBench MCQ                              Yes/No Accuracy        79.47   12
