
Coarse-to-Fine Proposal Refinement Framework for Audio Temporal Forgery Detection and Localization

About

Recently, a novel form of audio partial forgery has posed challenges to audio forensics, requiring advanced countermeasures to detect subtle manipulations within long-duration audio. However, existing countermeasures still serve a classification purpose and fail to localize the start and end timestamps of partially forged segments. To address this challenge, we introduce a novel coarse-to-fine proposal refinement framework (CFPRF) that incorporates a frame-level detection network (FDN) and a proposal refinement network (PRN) for audio temporal forgery detection and localization. Specifically, the FDN aims to mine informative inconsistency cues between real and fake frames to obtain discriminative features that are beneficial for roughly indicating forgery regions. The PRN is responsible for predicting confidence scores and regression offsets to refine the coarse-grained proposals derived from the FDN. To learn robust discriminative features, we devise a difference-aware feature learning (DAFL) module guided by contrastive representation learning to enlarge the subtle differences between frames induced by minor manipulations. We further design a boundary-aware feature enhancement (BAFE) module to capture the contextual information of multiple transition boundaries and guide the interaction between boundary information and temporal features via a cross-attention mechanism. Extensive experiments show that our CFPRF achieves state-of-the-art performance on various datasets, including LAV-DF, ASVS2019PS, and HAD.
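The two-stage pipeline described above can be illustrated at a toy level: the FDN's role corresponds to producing frame-level fake scores that are grouped into coarse proposals, and the PRN's role corresponds to filtering those proposals by confidence and adjusting their boundaries with regression offsets. The sketch below is a minimal illustration of that coarse-to-fine idea, not the paper's implementation; the function names, thresholds, and example scores are all assumptions.

```python
import numpy as np

def coarse_proposals(frame_scores, threshold=0.5):
    """Group consecutive frames whose fake probability exceeds the
    threshold into coarse (start, end) proposals -- a stand-in for
    the frame-level detection network's output."""
    mask = np.asarray(frame_scores) > threshold
    proposals, start = [], None
    for i, fake in enumerate(mask):
        if fake and start is None:
            start = i
        elif not fake and start is not None:
            proposals.append((start, i))
            start = None
    if start is not None:
        proposals.append((start, len(mask)))
    return proposals

def refine_proposals(proposals, offsets, confidences, min_conf=0.5):
    """Apply predicted (start, end) regression offsets and keep only
    proposals whose confidence score passes a threshold -- a stand-in
    for the proposal refinement network."""
    refined = []
    for (s, e), (ds, de), c in zip(proposals, offsets, confidences):
        if c >= min_conf:
            refined.append((max(0, s + ds), e + de))
    return refined

# Toy frame-level fake probabilities for a 10-frame clip.
scores = [0.1, 0.2, 0.9, 0.95, 0.8, 0.1, 0.05, 0.7, 0.9, 0.2]
coarse = coarse_proposals(scores)
print(coarse)  # [(2, 5), (7, 9)]
fine = refine_proposals(coarse, offsets=[(-1, 1), (0, 0)],
                        confidences=[0.9, 0.3])
print(fine)    # [(1, 6)] -- the low-confidence proposal is dropped
```

In the actual framework, the scores, offsets, and confidences would come from learned networks; here they are hard-coded purely to show the data flow between the two stages.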

Junyan Wu, Wei Lu, Xiangyang Luo, Rui Yang, Qian Wang, Xiaochun Cao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Spoof Detection | PartialSpoof (PS) (test) | EER | 7.41 | 22 |
| Fake Detection | PartialSpoof (dev) | EER | 1.9 | 12 |
| Audio Spoof Detection | Half-truth Audio Detection (HAD) | EER | 8 | 5 |
| Speech Editing Detection | AiEdit | Accuracy | 65.54 | 5 |
| Speech Editing Detection | Pool HumanEdit and AiEdit average | Accuracy | 81.62 | 5 |
| Content Localization | HumanEdit | Accuracy | 86.44 | 5 |
| Speech Editing Detection | HumanEdit | Accuracy | 97.69 | 5 |
| Content Localization | AiEdit | Accuracy | 90.91 | 5 |
| Content Localization | Pool HumanEdit and AiEdit average | Accuracy | 88.68 | 5 |
| Fake Audio Localization | PartialSpoof (eval) | EER | 7.72 | 4 |
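Several of the benchmarks above report equal error rate (EER), the operating point at which the false-acceptance rate (spoofed audio accepted as genuine) equals the false-rejection rate (genuine audio rejected). The sketch below shows a common way to compute it by sweeping a threshold over detector scores; the score arrays are made-up illustrative values, not results from this paper.

```python
import numpy as np

def equal_error_rate(bona_scores, spoof_scores):
    """Sweep a decision threshold over all observed scores and return
    the EER: the point where false-acceptance rate (FAR) and
    false-rejection rate (FRR) are closest (averaged at that point).
    Higher scores are assumed to mean 'more likely genuine'."""
    bona = np.asarray(bona_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)
    thresholds = np.sort(np.concatenate([bona, spoof]))
    far = np.array([np.mean(spoof >= t) for t in thresholds])
    frr = np.array([np.mean(bona < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Perfectly separated scores give EER = 0.
print(equal_error_rate([0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.4]))  # 0.0

# One overlapping score on each side gives EER = 0.25 (25%).
print(equal_error_rate([0.2, 0.7, 0.8, 0.9], [0.1, 0.3, 0.4, 0.8]))  # 0.25
```

The EER values in the table are percentages, so e.g. an EER of 7.41 corresponds to 0.0741 on the scale returned by this sketch.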
