Beyond Semantic Priors: Mitigating Optimization Collapse for Generalizable Visual Forensics
About
While Vision-Language Models (VLMs) like CLIP have emerged as a dominant paradigm for generalizable deepfake detection, a representational disconnect remains: their semantic-centric pre-training is ill-suited for capturing the non-semantic artifacts inherent to hyper-realistic synthesis. In this work, we identify a failure mode termed Optimization Collapse, where detectors trained with Sharpness-Aware Minimization (SAM) degenerate to random guessing on non-semantic forgeries once the perturbation radius exceeds a narrow threshold. To theoretically formalize this collapse, we propose the Critical Optimization Radius (COR) to quantify the geometric stability of the optimization landscape, and leverage the Gradient Signal-to-Noise Ratio (GSNR) to measure generalization potential. We establish a theorem proving that COR increases monotonically with GSNR, thereby revealing that the geometric instability of SAM optimization originates from degraded intrinsic generalization potential. This result identifies the layer-wise attenuation of GSNR as the root cause of Optimization Collapse in detecting non-semantic forgeries. Although naively reducing the perturbation radius yields stable convergence under SAM, it merely treats the symptom without mitigating the intrinsic generalization degradation, necessitating enhanced gradient fidelity. Building on this insight, we propose the Contrastive Regional Injection Transformer (CoRIT), which integrates a computationally efficient Contrastive Gradient Proxy (CGP) with three training-free strategies: a Region Refinement Mask to suppress CGP variance, Regional Signal Injection to preserve CGP magnitude, and Hierarchical Representation Integration to attain more generalizable representations. Extensive experiments demonstrate that CoRIT mitigates optimization collapse and achieves state-of-the-art generalization across cross-domain and universal forgery benchmarks.
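To make the two quantities above concrete, the sketch below shows the standard SAM update (ascend to the worst-case point within an L2 ball of radius rho, then descend with the gradient taken there) and a per-parameter GSNR estimate (squared mean of per-sample gradients over their variance), on a toy least-squares problem. This is an illustrative minimal example, not the paper's CoRIT implementation; the function names, the toy objective, and the rho/lr values are our own assumptions.

```python
import numpy as np

def per_sample_grads(w, X, y):
    # Per-sample gradient of squared error 0.5*(x_i.w - y_i)^2: (x_i.w - y_i) * x_i
    r = X @ w - y
    return r[:, None] * X  # shape (n_samples, n_params)

def gsnr(grads, eps=1e-12):
    # GSNR per parameter: squared mean of per-sample gradients / their variance.
    # Higher GSNR indicates a cleaner gradient signal across samples.
    mu = grads.mean(axis=0)
    var = grads.var(axis=0)
    return mu**2 / (var + eps)

def sam_step(w, X, y, rho=0.05, lr=0.1):
    # SAM: perturb weights to the (first-order) worst point in an L2 ball
    # of radius rho, then apply the gradient computed at that point.
    g = per_sample_grads(w, X, y).mean(axis=0)
    eps_w = rho * g / (np.linalg.norm(g) + 1e-12)
    g_adv = per_sample_grads(w + eps_w, X, y).mean(axis=0)
    return w - lr * g_adv

# Toy regression problem (synthetic data, illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=64)

w = np.zeros(4)
for _ in range(300):
    w = sam_step(w, X, y)

print(np.round(w, 2))                      # recovered weights
print(gsnr(per_sample_grads(np.zeros(4), X, y)))  # GSNR at initialization
```

The paper's theorem ties these together: when GSNR is large, the SAM perturbation direction is dominated by signal rather than sample noise, so a larger radius rho can be tolerated before the ascent step destabilizes training (a larger COR); when GSNR attenuates, even a modest rho pushes the update into noise-driven directions.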
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Face Forgery Detection | DFDCP | Frame-Level AUC | 88.8 | 64 |
| Face Forgery Detection | DFDC | -- | -- | 52 |
| AI-generated image detection | UniversalFakeDetect | Pro-GAN Accuracy | 99.99 | 32 |
| Face Forgery Detection | CDF v2 (test) | Frame-Level AUC | 89.1 | 23 |
| Face Forgery Detection | DFD (test) | Frame-Level AUC | 91.3 | 21 |
| Face Forgery Detection | DFDC (test) | Frame-Level AUC | 84.5 | 19 |
| Face Forgery Detection | CDF v1 (test) | Frame-Level AUC | 90.9 | 19 |
| Face Forgery Detection | DFDCP | Video-Level AUC | 0.913 | 15 |
| Face Forgery Detection | CDF v2 | Video-Level AUC | 94.1 | 12 |
| Face Forgery Detection | DF40 | Uniface Score | 91.8 | 11 |