SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection
About
Audio deepfake detection (ADD) is crucial to combat the misuse of speech synthesized by generative AI models. Existing ADD models suffer from generalization issues, with a large performance discrepancy between in-domain and out-of-domain data. Moreover, the black-box nature of existing models limits their use in real-world scenarios, where explanations are required for model decisions. To alleviate these issues, we introduce a new ADD model that explicitly exploits the Style-LInguistics Mismatch (SLIM) in fake speech to separate it from real speech. SLIM first employs self-supervised pretraining on only real samples to learn the style-linguistics dependency in the real class. The learned features are then combined with standard pretrained acoustic features (e.g., Wav2vec) to train a classifier on the real and fake classes. With the feature encoders frozen, SLIM outperforms benchmark methods on out-of-domain datasets while achieving competitive results on in-domain data. The features learned by SLIM allow us to quantify the (mis)match between style and linguistic content in a sample, thereby facilitating an explanation of the model's decision.
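The core idea above — scoring how well a sample's style representation agrees with its linguistics representation — can be illustrated with a minimal sketch. This is not the paper's implementation: the abstract does not specify the distance metric or fusion scheme, so the cosine-distance score, the function names, and the embedding shapes below are all assumptions for illustration.

```python
import numpy as np

def mismatch_score(style_emb: np.ndarray, ling_emb: np.ndarray) -> float:
    """Cosine distance between a style embedding and a linguistics embedding.

    A low score means the two streams agree (consistent with real speech,
    on which the style-linguistics dependency was learned); a high score
    flags a style-linguistics mismatch, treated as evidence of fake speech.
    The choice of cosine distance is an assumption, not the paper's metric.
    """
    s = style_emb / np.linalg.norm(style_emb)
    l = ling_emb / np.linalg.norm(ling_emb)
    return 1.0 - float(np.dot(s, l))

# Hypothetical embeddings from two frozen pretrained encoders.
rng = np.random.default_rng(0)
style = rng.standard_normal(256)

matched = style + 0.1 * rng.standard_normal(256)   # style and linguistics agree
mismatched = rng.standard_normal(256)              # independent, no dependency

print(mismatch_score(style, matched))      # near 0: looks real
print(mismatch_score(style, mismatched))   # near 1: looks fake
```

In the full model this scalar would not be thresholded directly; the learned dependency features are concatenated with standard acoustic features and fed to a trained real/fake classifier, with the score serving as an interpretable by-product.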
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Audio Deepfake Detection | In-the-Wild | EER: 12.5 | 58 |
| Audio Deepfake Detection | ASVspoof 2021 | EER: 4.4 | 27 |
| Audio Deepfake Detection | ASVspoof 2019 | EER: 0.2 | 25 |
| Audio Deepfake Detection | MLAAD-EN | EER: 10.7 | 18 |
| Audio Deepfake Detection | ASVspoof LA and DF 2021 | EER (DF): 4.4 | 17 |
| Deepfake Audio Detection | ASVspoof LA 2019 | EER (%): 20 | 12 |