
Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization

About

In this paper, we analyse the generalization ability of binary classifiers for the task of deepfake detection. We find that the stumbling block to their generalization is the identity representation unintentionally learned from training images. Termed the Implicit Identity Leakage, this phenomenon has been verified both qualitatively and quantitatively across various DNNs. Based on this understanding, we propose a simple yet effective method, the ID-unaware Deepfake Detection Model, to reduce the influence of this phenomenon. Extensive experimental results demonstrate that our method outperforms the state of the art in both in-dataset and cross-dataset evaluation. The code is available at https://github.com/megvii-research/CADDM.

Shichao Dong, Jin Wang, Renhe Ji, Jiajun Liang, Haoqiang Fan, Zheng Ge • 2022

Related benchmarks

Task                             Dataset                   Metric     Result    Rank
Deepfake Detection               DFDC (test)               --         --        122
Deepfake Detection               CelebDF v2                AUC        0.939     57
Deepfake Detection               CelebDF (CDF) v2 (test)  AUC        80.7      52
Face Forgery Detection           DFDC                      --         --        52
Deepfake Detection               FF++ (test)               AUC        99.79     44
Frame-level Deepfake Detection   DFD                       AUC        82.9      42
Deepfake Detection               Celeb-DF (test)           Accuracy   91.63     40
Video-level Deepfake Detection   DFDC                      AUC        0.739     34
Deepfake Detection               CelebDF (test)            AUC        0.9388    30
Frame-level Deepfake Detection   DFDC-P                    AUC        72.45     28

(10 of 33 rows shown)
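The AUC figures above are the standard area under the ROC curve, the usual metric for deepfake detection benchmarks. As a minimal sketch (the function name and the example scores are illustrative, not from the paper), AUC can be computed directly from per-frame scores via its rank interpretation: the probability that a randomly chosen fake sample scores higher than a randomly chosen real one, with ties counting half.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank statistic:
    the fraction of (positive, negative) pairs where the
    positive (fake) sample outscores the negative (real) one,
    counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of fake (1) vs. real (0) frames gives AUC = 1.0
print(auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

Note that some leaderboard entries report AUC on a 0–1 scale and others as a percentage; the values are reproduced as listed.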

Other info

Code: https://github.com/megvii-research/CADDM