
Deepfake detection of occluded images using a patch-based approach

  • Regular Paper
  • Published:
Multimedia Systems

Abstract

DeepFake uses deep learning and artificial intelligence techniques to produce or alter video and image content, typically generated by GANs. It can be misused to spread fictitious news, to commit ethical and financial crimes, and to degrade the performance of facial recognition systems. Detecting whether an image is real or fake is therefore important, especially for authenticating the originality of people’s images or videos. One of the most important challenges in this task is occlusion, which decreases detection precision. In this study, we present a deep learning approach that uses the entire face together with face patches to distinguish real from fake images under blurring, compression, scaling, and especially occlusion. The approach makes a three-path decision: first, reasoning over the entire face; second, a decision based on the concatenation of the feature vectors of the face patches; and third, a majority vote over per-patch decisions based on these features. To test the approach, new data sets of real and fake images are created. The fake images are produced by StyleGAN and StyleGAN2 trained on FFHQ images, and by StarGAN and PGGAN trained on CelebA images; the CelebA and FFHQ data sets serve as the real images. The proposed approach reaches higher accuracy in earlier epochs than other methods and improves the state-of-the-art results by 0.4%–7.9% on the different constructed data sets. In addition, the experimental results show that weighting the patches may further improve accuracy.
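
The following is a minimal PyTorch sketch of the three decision paths described above (entire face, concatenated patch features, and a per-patch majority vote). The backbone, feature dimension, patch count, and module names are illustrative assumptions, not the authors' exact architecture; a weighted vote over the patches, as the paper's experiments suggest, would replace the uniform mean in the third path.

```python
import torch
import torch.nn as nn


class PatchBasedDetector(nn.Module):
    """Illustrative three-path real/fake detector (a sketch, not the paper's exact network)."""

    def __init__(self, feat_dim: int = 128, num_patches: int = 4):
        super().__init__()
        # Small shared CNN backbone applied to the entire face and to each patch.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.face_head = nn.Linear(feat_dim, 2)                   # path 1: entire face
        self.concat_head = nn.Linear(feat_dim * num_patches, 2)   # path 2: concatenated patch features
        self.patch_head = nn.Linear(feat_dim, 2)                  # path 3: per-patch decision

    def forward(self, face, patches):
        # face: (B, 3, H, W); patches: (B, P, 3, h, w)
        b, p = patches.shape[:2]
        face_feat = self.backbone(face)
        patch_feat = self.backbone(patches.flatten(0, 1)).view(b, p, -1)

        logits_face = self.face_head(face_feat)                    # path 1
        logits_concat = self.concat_head(patch_feat.flatten(1))    # path 2
        votes = self.patch_head(patch_feat).argmax(-1)             # path 3: 0 = real, 1 = fake per patch
        majority_fake = (votes.float().mean(dim=1) > 0.5).long()   # uniform majority vote over patches
        return logits_face, logits_concat, majority_fake
```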


Data availability

The public data sets are available, and the synthesized images can be generated with the publicly released code of the GANs; all of these resources are available on the Internet.
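
As a minimal sketch of how such a real/fake training set could be assembled, assuming real face crops (CelebA or FFHQ) are stored under dataset/real/ and sampled GAN outputs (StyleGAN, StyleGAN2, StarGAN, PGGAN) under dataset/fake/; the directory layout and image size are illustrative assumptions, not the paper's exact pipeline:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical layout: dataset/real/ holds CelebA/FFHQ face crops,
# dataset/fake/ holds images sampled from the trained GANs.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("dataset", transform=transform)  # classes sorted alphabetically: fake=0, real=1
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
```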

Notes

  1. shaoanlu/face_toolbox_keras.


Author information


Corresponding author

Correspondence to Mohsen Ebrahimi Moghaddam.

Additional information

Communicated by Q. Shen.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Soleimani, M., Nazari, A. & Moghaddam, M.E. Deepfake detection of occluded images using a patch-based approach. Multimedia Systems 29, 2669–2687 (2023). https://doi.org/10.1007/s00530-023-01140-8


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s00530-023-01140-8
