GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13644)

Abstract

Facial forgery by deepfakes has raised severe societal concerns. The vision community has proposed several solutions to combat misinformation on the internet via automated deepfake detection systems. At the same time, recent studies have demonstrated that facial-analysis-based deep learning models can discriminate based on protected attributes. For the commercial adoption and large-scale roll-out of deepfake detection technology, it is therefore vital to evaluate and understand the fairness (the absence of any prejudice or favoritism) of deepfake detectors across demographic variations such as gender and race, since a performance differential between demographic sub-groups would affect millions of people in the disadvantaged sub-group. This paper evaluates the fairness of deepfake detectors across males and females. However, existing deepfake datasets are not annotated with demographic labels that would facilitate such a fairness analysis. To this end, we manually annotated existing popular deepfake datasets with gender labels and evaluated the performance differential of current deepfake detectors across gender. Our analysis of the gender-labeled versions of the datasets suggests that (a) current deepfake datasets have a skewed distribution across gender, and (b) commonly adopted deepfake detectors obtain unequal performance across gender, performing better on males than on females in most cases. Finally, we contribute a gender-balanced and annotated deepfake dataset, GBDF, to mitigate the performance differential and to promote research and development towards fairness-aware deepfake detectors. The GBDF dataset is publicly available at https://github.com/aakash4305/GBDF.
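
The performance differential studied in this paper can be measured with standard sub-group metrics. The following is a minimal sketch in Python (not the authors' released code) of one way to compute per-gender AUC and the gap between sub-groups; the file name "predictions.csv" and the columns "score", "label", and "gender" are hypothetical placeholders for a detector's per-video outputs on a gender-annotated test set such as GBDF.

    # Minimal sketch: per-gender AUC and the male/female gap for a deepfake detector.
    # Assumes a hypothetical CSV with columns "score" (predicted fake probability),
    # "label" (1 = fake, 0 = real), and "gender" ("male"/"female").
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def per_gender_auc(df: pd.DataFrame) -> dict:
        """Return the AUC of each gender sub-group plus the absolute gap."""
        aucs = {gender: roc_auc_score(group["label"], group["score"])
                for gender, group in df.groupby("gender")}
        aucs["gap"] = abs(aucs.get("male", 0.0) - aucs.get("female", 0.0))
        return aucs

    if __name__ == "__main__":
        # Placeholder path; substitute the output of your own detector run.
        print(per_gender_auc(pd.read_csv("predictions.csv")))

A gender-balanced benchmark such as GBDF makes the two sub-group AUCs directly comparable, since neither gender dominates the real or fake samples.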


Acknowledgement

This work is supported in part by National Science Foundation (NSF) award no. 2129173. The research infrastructure used in this study is supported in part by grant no. 13106715 from the Defense University Research Instrumentation Program (DURIP) of the Air Force Office of Scientific Research.

Author information

Corresponding author

Correspondence to Ajita Rattani.

Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper

Nadimpalli, A.V., Rattani, A. (2023). GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection. In: Rousseau, JJ., Kapralos, B. (eds) Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. ICPR 2022. Lecture Notes in Computer Science, vol 13644. Springer, Cham. https://doi.org/10.1007/978-3-031-37742-6_25

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-37742-6_25

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37741-9

  • Online ISBN: 978-3-031-37742-6

  • eBook Packages: Computer Science, Computer Science (R0)
