
Decoupling Bias, Aligning Distributions: Synergistic Fairness Optimization for Deepfake Detection

About

Fairness is a core requirement for the trustworthy deployment of deepfake detection models, especially in digital identity security. Biases in detection models toward demographic groups, such as gender and race, can lead to systematic misjudgments, widening the digital divide and exacerbating social inequities. However, current fairness-enhanced detectors often improve fairness at the cost of detection accuracy. To address this challenge, we propose a dual-mechanism collaborative optimization framework that integrates structural fairness decoupling with global distribution alignment: it decouples channels sensitive to demographic groups at the architectural level, then reduces the distance between the overall sample distribution and each demographic group's distribution at the feature level. Experimental results demonstrate that, compared with other methods, our framework improves both inter-group and intra-group fairness while maintaining overall detection accuracy across domains. The code is available at https://github.com/ywh1093/Fairness-Optimization.
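The two mechanisms described above can be illustrated with a minimal sketch (not the authors' code; all names are illustrative assumptions): a channel mask that zeroes demographic-sensitive feature channels, and a feature-level penalty that pulls each group's mean feature toward the global mean.

```python
# Hedged sketch of the two mechanisms, in pure Python for clarity.
# Assumption: features are plain lists of floats; in practice these
# would be tensors produced by the detector's backbone.

def decouple_channels(feature, sensitive_idx):
    """Structural decoupling (illustrative): zero out channels flagged
    as sensitive to demographic attributes."""
    return [0.0 if i in sensitive_idx else v for i, v in enumerate(feature)]

def group_alignment_penalty(features, groups):
    """Distribution alignment (illustrative): sum of squared distances
    between each group's mean feature and the global mean feature."""
    dim = len(features[0])
    n = len(features)
    mu = [sum(f[d] for f in features) / n for d in range(dim)]  # global mean
    penalty = 0.0
    for g in set(groups):
        members = [f for f, lab in zip(features, groups) if lab == g]
        mu_g = [sum(f[d] for f in members) / len(members) for d in range(dim)]
        penalty += sum((a - b) ** 2 for a, b in zip(mu_g, mu))
    return penalty

# Usage: groups with identical mean features incur zero penalty.
feats = [[1.0, 0.0], [1.0, 2.0], [1.0, 0.0], [1.0, 2.0]]
labels = ["f", "f", "m", "m"]
print(group_alignment_penalty(feats, labels))  # 0.0
print(decouple_channels([1.0, 2.0, 3.0], {1}))  # [1.0, 0.0, 3.0]
```

In a real training loop this penalty would be added to the detection loss with a weighting coefficient, so fairness alignment and accuracy are optimized jointly rather than traded off.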

Feng Ding, Wenhui Yi, Yunpeng Zhou, Xinan He, Hong Rao, Shu Hu · 2025

Related benchmarks

| Task               | Dataset               | Metric          | Result | Rank |
|--------------------|-----------------------|-----------------|--------|------|
| Deepfake Detection | Celeb-DF              | Gender FFPR     | 6.41   | 22   |
| Deepfake Detection | FF++                  | Gender FFPR     | 0.53   | 15   |
| Deepfake Detection | FairFD                | DPD             | 0.0159 | 14   |
| Forgery Detection  | FairFD benchmark      | DPD             | 0.0159 | 14   |
| Deepfake Detection | FF++ cross-domain     | Gender FFPR     | 1.01   | 10   |
| Deepfake Detection | DFD cross-domain      | Gender FFPR     | 0.61   | 10   |
| Deepfake Detection | Celeb-DF cross-domain | Gender FFPR     | 2.32   | 10   |
| Deepfake Detection | FF++ Gender (test)    | FFPR            | 0.53   | 7    |
| Deepfake Detection | DFDC                  | Gender FPR      | 1.76   | 7    |
| Deepfake Detection | DFD                   | Gender FPR (F)  | 4.72   | 7    |

Showing 10 of 13 rows
