
Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization

About

In this paper, we analyse the generalization ability of binary classifiers for the task of deepfake detection. We find that their generalization is hindered by identity representations unexpectedly learned from images. Termed the Implicit Identity Leakage, this phenomenon has been verified both qualitatively and quantitatively across various DNNs. Based on this understanding, we propose a simple yet effective method, the ID-unaware Deepfake Detection Model, to reduce the influence of this phenomenon. Extensive experimental results demonstrate that our method outperforms the state-of-the-art in both in-dataset and cross-dataset evaluation. The code is available at https://github.com/megvii-research/CADDM.
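The AUC figures reported in the benchmark table below are computed from per-frame (or per-video) fake-confidence scores. As a minimal illustration of how such in-dataset versus cross-dataset AUC comparisons work, here is a self-contained rank-based (Mann-Whitney) AUC sketch; this is not the paper's evaluation code, and the toy scores are invented for illustration only.

```python
def auc_score(labels, scores):
    """Rank-based AUC (Mann-Whitney U): probability that a randomly chosen
    fake sample (label 1) scores higher than a randomly chosen real one (label 0)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count pairwise wins; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a detector separates fakes well on its training distribution
# (in-dataset) but less cleanly on an unseen dataset (cross-dataset).
labels = [1, 1, 0, 0]                      # 1 = fake, 0 = real
in_dataset_scores = [0.9, 0.8, 0.3, 0.1]   # well separated
cross_dataset_scores = [0.6, 0.4, 0.5, 0.2]  # partially overlapping

print(auc_score(labels, in_dataset_scores))     # perfect separation -> 1.0
print(auc_score(labels, cross_dataset_scores))  # degraded separation -> 0.75
```

The gap between the two AUC values is exactly the kind of generalization drop the paper attributes to Implicit Identity Leakage.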

Shichao Dong, Jin Wang, Renhe Ji, Jiajun Liang, Haoqiang Fan, Zheng Ge • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Deepfake Detection | FF++ (test) | AUC 99.79 | 39 |
| Deepfake Detection | CelebDF (test) | AUC 0.9388 | 30 |
| Frame-level Deepfake Detection | DFDC-P | AUC 72.45 | 28 |
| Frame-level Deepfake Detection | DFD | AUC 82.9 | 28 |
| Face Forgery Detection | DFDC | -- | 25 |
| Frame-level Face Forgery Detection | Wild Deepfake | AUC 72.56 | 24 |
| Frame-level Deepfake Detection | Celeb-DF | AUC 0.7756 | 18 |
| Frame-level Deepfake Detection | Average (Celeb-DF, Wild Deepfake, DFDC-P, DFD, DiffSwap) | AUC 76.21 | 13 |
| Frame-level Deepfake Detection | DiffSwap | AUC (%) 75.58 | 13 |
| Face Forgery Detection | CDF v2 | Video AUC 0.939 | 11 |

(10 of 13 rows shown)
