
Explainable Deepfake Detection with RL Enhanced Self-Blended Images

About

Most prior deepfake detection methods lack explainable outputs. With the growing interest in multimodal large language models (MLLMs), researchers have started exploring their use in interpretable deepfake detection. However, a major obstacle in applying MLLMs to this task is the scarcity of high-quality datasets with detailed forgery attribution annotations, as textual annotation is both costly and challenging, particularly for high-fidelity forged images or videos. Moreover, multiple studies have shown that reinforcement learning (RL) can substantially enhance performance in visual tasks, especially in improving cross-domain generalization. To facilitate the adoption of mainstream MLLM frameworks in deepfake detection with reduced annotation cost, and to investigate the potential of RL in this context, we propose an automated Chain-of-Thought (CoT) data generation framework based on Self-Blended Images, along with an RL-enhanced deepfake detection framework. Extensive experiments validate the effectiveness of our CoT data construction pipeline, tailored reward mechanism, and feedback-driven synthetic data generation approach. Our method achieves performance competitive with state-of-the-art (SOTA) approaches across multiple cross-dataset benchmarks. Implementation details are available at https://github.com/deon1219/rlsbi.
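To illustrate the core idea behind Self-Blended Images (blending a lightly transformed copy of a face image with itself to produce a pseudo-fake with subtle blending artifacts), here is a minimal, simplified sketch. The specific transform (a brightness/offset jitter) and the soft elliptical mask are illustrative assumptions, not the paper's exact source-target transform pair or mask deformation pipeline:

```python
import numpy as np

def self_blended_image(img, seed=None):
    """Create a pseudo-fake by blending a jittered copy of `img` with
    itself under a soft mask (simplified Self-Blended Image sketch)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    base = img.astype(np.float32)
    # Source view: a color-jittered copy of the same image (stand-in
    # for the richer augmentations used in practice).
    src = np.clip(base * rng.uniform(0.9, 1.1) + rng.uniform(-10, 10), 0, 255)
    # Soft elliptical blending mask covering the central (face) region.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = ((ys - h / 2) / (h / 2)) ** 2 + ((xs - w / 2) / (w / 2)) ** 2
    mask = np.clip(1.0 - dist, 0.0, 1.0)[..., None]
    # Blend: source inside the mask, original outside; the seam between
    # the two regions is the forgery cue a detector learns to spot.
    blended = mask * src + (1.0 - mask) * base
    return blended.astype(np.uint8), mask
```

Because both the source and target come from the same real image, no actual forgery method is needed, which is what makes the pseudo-fake generation (and, in this work, the accompanying CoT annotation) fully automatic.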

Ning Jiang, Dingheng Zeng, Yanhong Liu, Haiyang Yi, Shijie Yu, Minghe Weng, Haifeng Shen, Ying Li • 2026

Related benchmarks

Task                            Dataset  AUC (%)  Rank
Frame-level Deepfake Detection  DFD      92.6     28
Frame-level Deepfake Detection  DFDC-P   81.7     28
Video-level Deepfake Detection  CDF2     96.3     13
Video-level Deepfake Detection  DFDC     83.9     13
Frame-level Deepfake Detection  CDF2     90.5     12
Video-level Deepfake Detection  DFDC-P   84.9     12
Video-level Deepfake Detection  DFD      96.5     11
Frame-level Deepfake Detection  DFDC     81.8      9
