Fine-Grained Frame Modeling in Multi-head Self-Attention for Speech Deepfake Detection

About

Transformer-based models have shown strong performance in speech deepfake detection, largely due to the effectiveness of the multi-head self-attention (MHSA) mechanism. MHSA provides frame-level attention scores, which are particularly valuable because deepfake artifacts often occur in small, localized regions along the temporal dimension of speech. This makes fine-grained frame modeling essential for accurately detecting subtle spoofing cues. In this work, we propose fine-grained frame modeling (FGFM) for MHSA-based speech deepfake detection, where the most informative frames are first selected through a multi-head voting (MHV) module. These selected frames are then refined via a cross-layer refinement (CLR) module to enhance the model's ability to learn subtle spoofing cues. Experimental results demonstrate that our method outperforms the baseline model and achieves Equal Error Rates (EERs) of 0.90%, 1.88%, and 6.64% on the LA21, DF21, and ITW datasets, respectively. These consistent improvements across multiple benchmarks highlight the effectiveness of our fine-grained modeling for robust speech deepfake detection.
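The abstract does not spell out how the multi-head voting (MHV) module selects frames, so the sketch below shows one plausible interpretation: each attention head nominates its top-k frames by total attention received, and frames with the most nominations across heads are kept. The function name, the importance measure, and the voting rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def multi_head_voting(attn, k=4):
    """Select k frames by voting across attention heads (illustrative sketch).

    attn: array of shape (num_heads, T, T), where attn[h, i, j] is the
          attention weight that query frame i places on key frame j in head h.
    Returns the indices (sorted ascending) of the k frames that collect the
    most top-k nominations across heads.
    """
    num_heads, T, _ = attn.shape
    votes = np.zeros(T, dtype=int)
    for h in range(num_heads):
        # Per-frame importance in this head: total attention the frame receives.
        importance = attn[h].sum(axis=0)
        # Each head votes for its k most-attended frames.
        top = np.argsort(importance)[-k:]
        votes[top] += 1
    # Frames with the most votes win; stable sort breaks ties by frame index.
    selected = np.argsort(-votes, kind="stable")[:k]
    return np.sort(selected)
```

Under this reading, the selected frame subset would then be passed to the cross-layer refinement (CLR) module for further processing; the actual selection criterion used in the paper may differ.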

Tuan Dat Phuong, Duc-Tuan Truong, Long-Vu Hoang, Trang Nguyen Thi Thu • 2026

Related benchmarks

Task                         Dataset                      Result (EER %)   Rank
Spoof Speech Detection       ASVspoof LA 2021 (eval)      -                36
Synthetic Speech Detection   ASVspoof DF 2021 (eval)      1.88             19
Speech Spoofing Detection    In-the-Wild (ITW) (eval)     6.31             19
