
Not made for each other - Audio-Visual Dissonance-based Deepfake Detection and Localization

About

We propose detecting deepfake videos based on the dissimilarity between the audio and visual modalities, termed the Modality Dissonance Score (MDS). We hypothesize that manipulation of either modality will lead to disharmony between the two, e.g., loss of lip-sync, unnatural facial and lip movements, etc. MDS is computed as an aggregate of dissimilarity scores between audio and visual segments in a video. Discriminative features are learnt for the audio and visual channels in a chunk-wise manner, employing a cross-entropy loss for the individual modalities and a contrastive loss that models inter-modality similarity. Extensive experiments on the DFDC and DeepFake-TIMIT datasets show that our approach outperforms the state-of-the-art by up to 7%. We also demonstrate temporal forgery localization, and show how our technique identifies the manipulated video segments.
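The chunk-wise scoring described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the feature extractors are omitted, and the margin value, distance metric, and sum aggregation are assumptions made for the sketch.

```python
import numpy as np

def contrastive_loss(dist, label, margin=1.0):
    """Contrastive loss over per-chunk audio-visual distances.

    label = 0 for real videos (modalities should agree, distance pulled to 0),
    label = 1 for fakes (distance pushed beyond `margin`).
    `margin` is a hyperparameter assumed here, not taken from the paper.
    """
    pos = (1 - label) * dist ** 2                      # pull matched pairs together
    neg = label * np.maximum(margin - dist, 0.0) ** 2  # push mismatched pairs apart
    return np.mean(pos + neg)

def modality_dissonance_score(audio_feats, visual_feats):
    """MDS: aggregate of per-chunk dissimilarities between the two streams.

    audio_feats, visual_feats: (num_chunks, feat_dim) embeddings produced by
    the (omitted) audio and visual sub-networks for aligned 1-second chunks.
    """
    per_chunk = np.linalg.norm(audio_feats - visual_feats, axis=1)  # L2 per chunk
    return per_chunk.sum()  # higher score => more audio-visual dissonance
```

At test time, a video would be flagged as fake when its MDS exceeds a threshold chosen on a validation set; the contrastive term is what spreads real and fake scores apart during training.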

Komal Chugh, Parul Gupta, Abhinav Dhall, Ramanathan Subramanian • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Deepfake Detection | DFDC (test) | AUC | 73.8 | 87 |
| Audio-visual video forgery detection | FakeAVCeleb | Accuracy | 69.29 | 41 |
| Deepfake Detection | FakeAVCeleb (test) | Accuracy | 82.8 | 39 |
| Deepfake Detection | DeepfakeTIMIT LQ | AUC | 97.92 | 19 |
| Deepfake Detection | DeepfakeTIMIT HQ | AUC | 96.87 | 19 |
| Audio-Visual Deepfake Detection | FakeAVCeleb | Accuracy | 82.8 | 11 |
| Audio-Visual Deepfake Detection | DeepFake Detection Challenge (DFDC) | Accuracy | 89.8 | 11 |
| Deepfake Detection | AV-Deepfake1M official (test) | AUC | 56.57 | 11 |
| Temporal Forgery Localization | LAV-DF 1.0 | AP@0.5 | 23.43 | 7 |
| Temporal Forgery Localization | LAV-DF 1.0 (full set) | AP@0.5 | 12.78 | 7 |
