FairAdapter: Detecting AI-generated Images with Improved Fairness
About
The high-quality, realistic images produced by generative models pose significant challenges for forensic detection. So far, data-driven deep neural networks have proven to be the most effective forensic tools for this task. However, they may overfit to certain semantics, resulting in considerable inconsistency in detection performance across different contents of generated samples. This can be regarded as an issue of detection fairness. In this paper, we propose a novel framework named FairAdapter to tackle the issue. Compared with existing state-of-the-art methods, our model achieves improved fairness performance. Our project: https://github.com/AppleDogDog/FairnessDetection
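The fairness issue above can be made concrete with a subgroup-gap metric. The sketch below is an illustration, not the paper's method: it assumes the benchmark "FFPR" numbers measure the spread of false positive rates across demographic subgroups (e.g. gender), and all function names are hypothetical.

```python
# Hypothetical sketch of a detection-fairness metric: the spread of
# false positive rates (FPR) across demographic subgroups.
# Labels: 0 = real image, 1 = generated/fake. Smaller gap = fairer detector.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over the real (label 0) samples."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_gap(y_true, y_pred, groups):
    """Return (max FPR - min FPR) across subgroups, plus per-group FPRs."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Toy example: all samples are real; the detector falsely flags
# 1 of 4 "m" samples but 2 of 4 "f" samples, so the gap is 0.25.
gap, rates = fpr_gap(
    y_true=[0] * 8,
    y_pred=[1, 0, 0, 0, 1, 1, 0, 0],
    groups=["m", "m", "m", "m", "f", "f", "f", "f"],
)
```

A perfectly fair detector in this sense has identical per-group FPRs and a gap of zero, regardless of its overall accuracy.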
Feng Ding, Jun Zhang, Xinan He, Jianfeng Xu • 2024
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Deepfake Detection | Celeb-DF | Gender FFPR | 8.59 | 22 |
| Deepfake Detection | FF++ | Gender FFPR | 4.16 | 15 |
| Deepfake Detection | FF++ cross-domain | Gender FFPR | 7.19 | 10 |
| Deepfake Detection | Celeb-DF cross-domain | Gender FFPR | 13.29 | 10 |
| Deepfake Detection | DFD cross-domain | Gender FFPR | 15.12 | 10 |
| Deepfake Detection | DFDC | Gender FPR | 2.39 | 7 |
| Deepfake Detection | DFD | Gender FPR (F) | 6.32 | 7 |
| Deepfake Detection | FF++ Gender (test) | FFPR | 4.16 | 7 |
| Deepfake Detection | FF++ Race (test) | FFPR | 43.22 | 7 |
| Deepfake Detection | FF++ Intersection (test) | FFPR | 86.91 | 7 |