
ProSDD: Learning Prosodic Representations for Speech Deepfake Detection against Expressive and Emotional Attacks

About

Speech deepfake detection (SDD) systems perform well on standard benchmark datasets but often fail to generalize to expressive and emotional spoofing attacks. Many methods rely on spoof-heavy training data, learning dataset-specific artifacts rather than transferable cues of natural speech. In contrast, humans internalize the variability of real speech and detect fakes as deviations from it. We introduce ProSDD, a two-stage framework that enriches model embeddings through supervised masked prediction of speaker-conditioned prosodic variation based on pitch, voice activity, and energy. Stage I learns prosodic variability from real speech, and Stage II jointly optimizes this objective with spoof classification. ProSDD consistently outperforms baselines under both ASVspoof 2019 and 2024 training, reducing ASVspoof 2024 EER from 25.43% to 16.14% (2019-trained) and from 39.62% to 7.38% (2024-trained), while achieving roughly 50% relative EER reductions on EmoFake and EmoSpoof-TTS.
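The Stage II objective described above (masked prosody prediction jointly optimized with spoof classification) can be sketched as a weighted sum of two losses. Everything in this snippet is an illustrative assumption, not the paper's implementation: the function names, the weight `lam`, the toy feature dimensions, and the use of plain MSE/BCE are placeholders for whatever ProSDD actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_prosody_loss(pred, target, mask):
    """MSE restricted to masked frames only -- a sketch of the
    supervised masked-prediction objective (hypothetical form)."""
    diff = (pred - target) ** 2
    return float((diff * mask[:, None]).sum() / (mask.sum() * pred.shape[1]))

def bce_spoof_loss(logit, label):
    """Binary cross-entropy for the spoof / bona fide classifier head."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)))

T, D = 100, 3                                   # frames; prosody dims: pitch, VAD, energy
target = rng.normal(size=(T, D))                # ground-truth prosodic features
pred = target + 0.1 * rng.normal(size=(T, D))   # model predictions for the same frames
mask = rng.random(T) < 0.15                     # ~15% of frames masked (assumed rate)

lam = 0.5                                       # auxiliary-loss weight (assumed)
loss = bce_spoof_loss(logit=2.0, label=1) + lam * masked_prosody_loss(pred, target, mask)
```

Because the prosody term is averaged over masked frames only, the gradient signal focuses the embeddings on reconstructing natural prosodic variation rather than memorizing spoof artifacts.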

Aurosweta Mahapatra, Ismail Rasim Ulgen, Kong Aik Lee, Nicholas Andrews, Berrak Sisman · 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Audio Deepfake Detection | ASVspoof 2021 | EER | 3.87 | 39 |
| Audio Deepfake Detection | ASVspoof 2019 | EER | 19.04 | 37 |
| Deepfake Audio Detection | ASVspoof LA 2019 | EER (%) | 0.42 | 20 |
| Audio Deepfake Detection | ASVspoof 2024 | EER | 7.38 | 16 |
| Speech Deepfake Detection | EmoFake | EER | 3.7 | 12 |
| Speech Deepfake Detection | EmoSpoof | EER | 9.54 | 12 |
| Audio Deepfake Detection | ASVspoof 2024 | EER (%) | 7.38 | 8 |
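All results above are reported as EER (equal error rate): the operating point at which the false-acceptance rate (spoofed speech accepted as bona fide) equals the false-rejection rate (bona fide speech rejected). A minimal sketch of computing EER from detection scores, assuming higher scores mean "more likely bona fide"; this is a simple threshold sweep, not the official evaluation tooling.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER from detection scores; labels: 1 = bona fide, 0 = spoof.
    Sweeps thresholds at the observed score positions only."""
    order = np.argsort(scores)[::-1]      # sort scores high to low
    labels = np.asarray(labels)[order]
    n_pos = labels.sum()                  # bona fide trials
    n_neg = len(labels) - n_pos           # spoof trials
    tp = np.cumsum(labels)                # bona fide accepted at each threshold
    fp = np.cumsum(1 - labels)            # spoof accepted at each threshold
    frr = 1 - tp / n_pos                  # false-rejection rate
    far = fp / n_neg                      # false-acceptance rate
    i = np.argmin(np.abs(far - frr))      # closest crossing point
    return float((far[i] + frr[i]) / 2)

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # toy detection scores
labels = [1, 1, 0, 1, 0, 0]               # toy ground truth
eer = equal_error_rate(scores, labels)    # ≈ 0.333 on this toy data
```

A perfectly separating detector yields an EER of 0; the table's values (e.g. 7.38 on ASVspoof 2024) are this quantity expressed as a percentage.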
