Detecting AI-Generated Forgeries via Iterative Manifold Deviation Amplification
About
The proliferation of highly realistic AI-generated images poses critical challenges for digital forensics, demanding precise pixel-level localization of manipulated regions. Existing methods predominantly learn discriminative patterns of specific forgeries and often struggle with novel manipulations as editing techniques continue to evolve. We propose the Iterative Forgery Amplifier Network (IFA-Net), which shifts from learning "what is fake" to modeling "what is real". Grounded in the principle that all manipulations deviate from the natural image manifold, IFA-Net leverages a frozen Masked Autoencoder (MAE) pretrained on real images as a universal realness prior.

Our framework operates through a two-stage closed-loop process: an initial Dual-Stream Segmentation Network (DSSN) fuses the original image with MAE reconstruction residuals for coarse localization, followed by a Task-Adaptive Prior Injection (TAPI) module that converts this coarse prediction into guiding prompts to steer the MAE decoder and amplify reconstruction failures in suspicious regions for precise refinement. Extensive experiments on four diffusion-based inpainting benchmarks show that IFA-Net achieves an average improvement of 6.5% in IoU and 8.1% in F1-score over the second-best method, while demonstrating strong generalization to traditional manipulation types.
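The two-stage closed loop can be sketched as follows. This is a toy illustration, not the released implementation: `mae_reconstruct` and `dssn_localize` are hypothetical stand-ins that simulate the frozen MAE realness prior and the DSSN, and the prompt-driven amplification step mimics TAPI's effect of enlarging reconstruction error inside suspicious regions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 32x32 grayscale image with a "tampered" square region.
image = rng.random((32, 32))
gt_mask = np.zeros((32, 32))
gt_mask[8:16, 8:16] = 1.0

def mae_reconstruct(img, prompt=None):
    """Stand-in for the frozen MAE realness prior: reconstruction error is
    small on natural regions and large on manipulated ones. A prompt mask
    (from TAPI) steers the decoder, amplifying error where forgery is
    suspected. Here the deviation is simulated from the ground-truth mask."""
    deviation = 0.02 + 0.3 * gt_mask          # toy manifold deviation
    if prompt is not None:
        deviation = deviation * (1.0 + 2.0 * prompt)  # amplification
    return img + deviation

def dssn_localize(img, residual, thresh):
    """Stand-in for the Dual-Stream Segmentation Network: fuse the image
    and residual streams (here, just threshold the residual) into a mask."""
    return (residual > thresh).astype(float)

# Stage 1: coarse localization from the raw reconstruction residual.
residual0 = np.abs(mae_reconstruct(image) - image)
coarse = dssn_localize(image, residual0, thresh=0.1)

# Stage 2: the coarse mask becomes a prompt that steers the MAE decoder,
# amplifying reconstruction failures in suspicious regions for refinement.
residual1 = np.abs(mae_reconstruct(image, prompt=coarse) - image)
refined = dssn_localize(image, residual1, thresh=0.3)

iou = (refined * gt_mask).sum() / max(((refined + gt_mask) > 0).sum(), 1)
print(f"refined IoU vs ground truth: {iou:.2f}")
```

In this toy setting the amplification triples the residual inside the coarse mask, so the refined prediction survives a stricter threshold; in the real model, both stages are learned and the MAE is kept frozen.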
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Manipulation Localization | CocoGlide (test) | F1 Score | 94 | 18 |
| Image Forgery Localization | NIST16 (test) | F1 Score | 97.4 | 12 |
| Generative Image Tampering Localization | OpenSDID (test) | IoU | 48.7 | 8 |
| Generative Image Tampering Localization | GIT10K (test) | IoU | 92.8 | 8 |
| Generative Image Tampering Localization | Inpaint32K (test) | IoU | 0.811 | 8 |
| Traditional Tampering Localization | IMD 2020 (test) | mIoU | 39.2 | 8 |
| Traditional Tampering Localization | CASIA (test) | IoU | 41.6 | 8 |