ProSDD: Learning Prosodic Representations for Speech Deepfake Detection against Expressive and Emotional Attacks
About
Speech deepfake detection (SDD) systems perform well on standard benchmark datasets but often fail to generalize to expressive and emotional spoofing attacks. Many methods rely on spoof-heavy training data, learning dataset-specific artifacts rather than transferable cues of natural speech. In contrast, humans internalize the variability of real speech and detect fakes as deviations from it. We introduce ProSDD, a two-stage framework that enriches model embeddings through supervised masked prediction of speaker-conditioned prosodic variation based on pitch, voice activity, and energy. Stage I learns prosodic variability from real speech, and Stage II jointly optimizes this objective with spoof classification. ProSDD consistently outperforms baselines under both ASVspoof 2019 and 2024 training, reducing the ASVspoof 2024 EER from 25.43% to 16.14% (2019-trained) and from 39.62% to 7.38% (2024-trained), while achieving roughly 50% relative EER reductions on EmoFake and EmoSpoof-TTS.
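The two-stage training objective described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the MSE/BCE loss choices, the function names, and the weight `lam` are all assumptions; the actual model predicts speaker-conditioned prosodic features (pitch, voice activity, energy) over masked frames.

```python
import math

def masked_mse(pred, target, mask):
    """Mean squared error over masked frames only (the prediction targets)."""
    errs = [(p - t) ** 2 for p, t, m in zip(pred, target, mask) if m]
    return sum(errs) / len(errs)

def bce(spoof_prob, label):
    """Binary cross-entropy for spoof classification (label: 1=real, 0=spoof)."""
    eps = 1e-9
    return -(label * math.log(spoof_prob + eps)
             + (1 - label) * math.log(1 - spoof_prob + eps))

def stage1_loss(pred_prosody, true_prosody, mask):
    # Stage I: supervised masked prediction of prosodic variation,
    # trained on real speech only.
    return masked_mse(pred_prosody, true_prosody, mask)

def stage2_loss(pred_prosody, true_prosody, mask, spoof_prob, label, lam=0.5):
    # Stage II: jointly optimize spoof classification and the prosody
    # objective; `lam` is a hypothetical weighting, not from the paper.
    return bce(spoof_prob, label) + lam * masked_mse(pred_prosody, true_prosody, mask)
```

For example, with one masked frame mispredicted by 1.0 and a real utterance scored at 0.5, the Stage II loss is `bce(0.5, 1) + 0.5 * 1.0`.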
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Deepfake Detection | ASVspoof 2021 | EER (%) | 3.87 | 39 |
| Audio Deepfake Detection | ASVspoof 2019 | EER (%) | 19.04 | 37 |
| Deepfake Audio Detection | ASVspoof LA 2019 | EER (%) | 0.42 | 20 |
| Audio Deepfake Detection | ASVspoof 2024 | EER (%) | 7.38 | 16 |
| Speech Deepfake Detection | EmoFake | EER (%) | 3.7 | 12 |
| Speech Deepfake Detection | EmoSpoof | EER (%) | 9.54 | 12 |
| Audio Deepfake Detection | ASVspoof 2024 | EER (%) | 7.38 | 8 |