
Beyond Semantic Priors: Mitigating Optimization Collapse for Generalizable Visual Forensics

About

While Vision-Language Models (VLMs) like CLIP have emerged as a dominant paradigm for generalizable deepfake detection, a representational disconnect remains: their semantic-centric pre-training is ill-suited for capturing non-semantic artifacts inherent to hyper-realistic synthesis. In this work, we identify a failure mode termed Optimization Collapse, where detectors trained with Sharpness-Aware Minimization (SAM) degenerate to random guessing on non-semantic forgeries once the perturbation radius exceeds a narrow threshold. To theoretically formalize this collapse, we propose the Critical Optimization Radius (COR) to quantify the geometric stability of the optimization landscape, and leverage the Gradient Signal-to-Noise Ratio (GSNR) to measure generalization potential. We establish a theorem proving that COR increases monotonically with GSNR, thereby revealing that the geometric instability of SAM optimization originates from degraded intrinsic generalization potential. This result identifies the layer-wise attenuation of GSNR as the root cause of Optimization Collapse in detecting non-semantic forgeries. Although naively reducing perturbation radius yields stable convergence under SAM, it merely treats the symptom without mitigating the intrinsic generalization degradation, necessitating enhanced gradient fidelity. Building on this insight, we propose the Contrastive Regional Injection Transformer (CoRIT), which integrates a computationally efficient Contrastive Gradient Proxy (CGP) with three training-free strategies: Region Refinement Mask to suppress CGP variance, Regional Signal Injection to preserve CGP magnitude, and Hierarchical Representation Integration to attain more generalizable representations. Extensive experiments demonstrate that CoRIT mitigates optimization collapse and achieves state-of-the-art generalization across cross-domain and universal forgery benchmarks.
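The two quantities the abstract builds on can be made concrete. SAM perturbs the weights by an ascent step of radius rho along the normalized gradient, and GSNR measures, per parameter, how strongly per-sample gradients agree (squared mean over variance). A minimal numpy sketch for illustration only; the function names and the small stabilizing constant are my own, not taken from the paper:

```python
import numpy as np

def gsnr(per_sample_grads):
    """Gradient Signal-to-Noise Ratio per parameter.

    per_sample_grads: (n_samples, n_params) array of per-example gradients.
    GSNR_j = E[g_j]^2 / Var[g_j]; a high value means the gradients for
    parameter j point the same way across samples (strong, low-noise signal),
    while a value near zero means the per-sample gradients cancel out.
    """
    g = np.asarray(per_sample_grads, dtype=float)
    mean = g.mean(axis=0)
    var = g.var(axis=0) + 1e-12  # stabilizer to avoid division by zero
    return mean ** 2 / var

def sam_perturbation(grad, rho):
    """SAM ascent step: epsilon = rho * g / ||g||.

    Returns the weight perturbation of radius rho in the direction of the
    aggregated gradient; the abstract's collapse occurs when rho exceeds
    the model's Critical Optimization Radius.
    """
    return rho * grad / (np.linalg.norm(grad) + 1e-12)
```

With these definitions, the theorem's direction is intuitive: when per-sample gradients are noisy (low GSNR), even a small rho moves the weights along an unreliable direction, so the radius at which training stays stable shrinks.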

Jipeng Liu, Haichao Shi, Siyu Xing, Rong Yin, Xiao-Yu Zhang• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Face Forgery Detection | DFDCP | Frame-level AUC | 88.8 | 64 |
| Face Forgery Detection | DFDC | -- | -- | 52 |
| AI-generated image detection | UniversalFakeDetect | Pro-GAN Accuracy | 99.99 | 32 |
| Face Forgery Detection | CDF v2 (test) | Frame-level AUC | 89.1 | 23 |
| Face Forgery Detection | DFD (test) | Frame-level AUC | 91.3 | 21 |
| Face Forgery Detection | DFDC (test) | Frame-level AUC | 84.5 | 19 |
| Face Forgery Detection | CDF v1 (test) | Frame-level AUC | 90.9 | 19 |
| Face Forgery Detection | DFDCP | Video-level AUC | 0.913 | 15 |
| Face Forgery Detection | CDF v2 | Video-level AUC | 94.1 | 12 |
| Face Forgery Detection | DF40 | Uniface Score | 91.8 | 11 |

Showing 10 of 11 rows.
