Deepfake detection based on remote photoplethysmography

Published in Multimedia Tools and Applications

Abstract

Deepfake technology superimposes one person’s face onto an image or video of another person. Although it has legitimate applications, malicious uses are largely dominant. Deepfake generation has also improved in recent years, so detection methods that were previously effective perform poorly on newer fake videos, and no highly reliable detection method currently exists. DeepPhys is an end-to-end system based on a deep convolutional network that can remotely measure biological signals, such as heart rate and respiratory rate, from video. In this paper, we first introduce Deepfake technology, its background, the current mainstream detection methods, and related research. We then introduce remote photoplethysmography (rPPG) and a Deepfake detection approach based on it. Next, we describe traditional rPPG methods and the implementation of DeepPhys, and compare the two. Finally, through comparison with several state-of-the-art rPPG methods, we show that the DeepPhys model trained in our experiments better detects the biological information in video and is highly usable in practice.
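To make the rPPG measurement idea concrete, the following is a minimal sketch of a DeepPhys-style two-branch convolutional attention network, assuming PyTorch. The class names, layer widths, and the 36x36 input size are illustrative assumptions for exposition, not the exact configuration used in this paper or in Chen and McDuff [2].

# Minimal sketch of a DeepPhys-style two-branch attention network (illustrative only).
import torch
import torch.nn as nn


class SoftAttention(nn.Module):
    """1x1 convolution + sigmoid over appearance features, L1-normalised into a spatial mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.conv(x))               # (B, 1, H, W)
        b, _, h, w = mask.shape
        l1 = mask.sum(dim=(2, 3), keepdim=True) + 1e-8   # per-sample L1 norm
        return (h * w / 2.0) * mask / l1                 # mask now sums to H*W/2 per sample


class TwoBranchRPPG(nn.Module):
    """Motion branch consumes normalised frame differences; the appearance branch consumes the
    raw frame and gates the motion features with soft attention. The head regresses one pulse
    derivative per frame pair."""

    def __init__(self):
        super().__init__()
        self.motion1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.Tanh(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.Tanh())
        self.motion2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.Tanh(),
                                     nn.Conv2d(64, 64, 3, padding=1), nn.Tanh())
        self.app1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.Tanh(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.Tanh())
        self.app2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.Tanh(),
                                  nn.Conv2d(64, 64, 3, padding=1), nn.Tanh())
        self.attn1, self.attn2 = SoftAttention(32), SoftAttention(64)
        self.pool = nn.AvgPool2d(2)
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(64 * 9 * 9, 128), nn.Tanh(),
                                  nn.Linear(128, 1))

    def forward(self, diff: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        a1 = self.app1(frame)                                  # appearance features, stage 1
        m1 = self.pool(self.motion1(diff) * self.attn1(a1))    # gated motion features
        a2 = self.app2(self.pool(a1))                          # appearance features, stage 2
        m2 = self.pool(self.motion2(m1) * self.attn2(a2))
        return self.head(m2)                                   # one pulse-derivative value per sample


if __name__ == "__main__":
    model = TwoBranchRPPG()
    diff = torch.randn(4, 3, 36, 36)    # normalised frame differences c(t+1) - c(t)
    frame = torch.randn(4, 3, 36, 36)   # corresponding raw (appearance) frames
    print(model(diff, frame).shape)     # torch.Size([4, 1])

In this design, the appearance branch learns where skin pixels carry pulse information and produces a soft mask that gates the motion branch, which regresses the pulse derivative from normalised frame differences; the recovered pulse signal is what the detection stage later inspects.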


References

  1. Chao S, Wang C, Lai W (2019) Gait analysis and recognition prediction of the human skeleton based on migration learning. Phys A Stat Mech Appl 532:121812

  2. Chen W, McDuff D (2018) Deepphys: Video-based physiological measurement using convolutional attention networks. In: Proceedings of the European conference on computer vision (ECCV), pp 349–365

  3. De Haan G, Van Leest A (2014) Improved motion robustness of remote-PPG by using the blood volume pulse signature. Physiol Meas 35(9):1913

  4. De Haan G, Jeanne V (2013) Robust pulse rate from chrominance-based rPPG. IEEE Trans Biomed Eng 60(10):2878–2886

  5. Dolhansky B, Howes R, Pflaum B et al (2019) The DeepFake detection challenge (DFDC) preview dataset. arXiv:1910.08854

  6. Dong X, Bao J, Chen D et al (2020) Identity-driven DeepFake detection. arXiv:2012.03930

  7. Güera D, Delp EJ (2018) Deepfake video detection using recurrent neural networks. In: 2018 15th IEEE international conference on advanced video and signal based surveillance (AVSS). IEEE, pp 1–6

  8. Goodfellow IJ, Pouget-Abadie J, Mirza M et al (2014) Generative adversarial networks. arXiv:1406.2661

  9. Hernandez-Ortega J, Tolosana R, Fierrez J et al (2020a) DeepFakesON-Phys: DeepFakes detection based on heart rate estimation. arXiv:2010.00400

  10. Hernandez-Ortega J, Fierrez J, Morales A et al (2020b) A comparative evaluation of heart rate estimation methods using face videos. In: 2020 IEEE 44th annual computers, software, and applications conference (COMPSAC). IEEE, pp 1438–1443

  11. Isola P, Zhu JY, Zhou T et al (2017) Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1125–1134

  12. Lam A, Kuno Y (2015) Robust heart rate measurement from video using select random patches. In: Proceedings of the IEEE international conference on computer vision, pp 3640–3648

  13. Li Y, Lyu S (2018) Exposing deepfake videos by detecting face warping artifacts. arXiv:1811.00656

  14. Li L, Bao J, Zhang T et al (2020) Face X-ray for more general face forgery detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 5001–5010

  15. Matern F, Riess C, Stamminger M (2019) Exploiting visual artifacts to expose deepfakes and face manipulations. In: 2019 IEEE winter applications of computer vision workshops (WACVW). IEEE, pp 83–92

  16. Masi I, Killekar A, Mascarenhas RM et al (2020) Two-branch recurrent network for isolating deepfakes in videos. In: European conference on computer vision. Springer, Cham, pp 667–684

  17. Mirsky Y, Lee W (2021) The creation and detection of deepfakes: A survey. ACM Comput Surv (CSUR) 54(1):1–41

  18. Monkaresi H, Calvo RA, Yan H (2013) A machine learning approach to improve contactless heart rate monitoring using a webcam. IEEE J Biomed Health Inform 18(4):1153–1160

  19. Osman A, Turcot J, El Kaliouby R (2015) Supervised learning approach to remote heart rate estimation from facial videos. In: 2015 11th IEEE international conference and workshops on automatic face and gesture recognition (FG). IEEE, vol 1, pp 1–6

  20. Poh MZ, McDuff DJ, Picard RW (2010) Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE Trans Biomed Eng 58(1):7–11

  21. Pokroy AA, Egorov AD (2021) EfficientNets for DeepFake detection: comparison of pretrained models. In: 2021 IEEE conference of russian young researchers in electrical and electronic engineering (ElConRus), pp 598–600. https://doi.org/10.1109/ElConRus51938.2021.9396092

  22. Qian Y, Yin G, Sheng L et al (2020) Thinking in frequency: Face forgery detection by mining frequency-aware clues. In: European conference on computer vision. Springer, Cham, pp 86–103

  23. Tolosana R, Romero-Tapiador S, Fierrez J et al (2020) DeepFakes evolution: analysis of facial regions and fake detection performance. arXiv:2004.07532

  24. Wang W, Stuijk S, De Haan G (2014) Exploiting spatial redundancy of image sensor for motion robust rPPG. IEEE Trans Biomed Eng 62(2):415–425

  25. Wang W, den Brinker AC, Stuijk S et al (2016) Algorithmic principles of remote PPG. IEEE Trans Biomed Eng 64(7):1479–1491

  26. Wang TC, Liu MY, Zhu JY et al (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8798–8807

  27. Xiang J, Zhu G (2017) Joint face detection and facial expression recognition with MTCNN. In: 2017 4th international conference on information science and control engineering (ICISCE). IEEE, pp 424–427

  28. Yang X, Li Y, Lyu S (2019) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 8261–8265

  29. Yu Z, Li X, Zhao G (2019) Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks. arXiv:1905.02419

  30. Zhu JY, Park T, Isola P et al (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision, pp 2223–2232


Acknowledgements

This work was supported by the Guangzhou Science and Technology Plan Project (No. 201903010103), the “13th Five-Year Plan” for the Development of Philosophy and Social Sciences in Guangzhou (No. 2018GZYB36), the Science Foundation of the Guangdong Provincial Communications Department, China (No. N2015-02-064), and the Ministry of Education’s 2018 first batch of Industry-University Cooperation Collaborative Education projects for Information Security curriculum system construction (No. 201801087012).

Author information

Contributions

All authors contributed to the research, the experiments, and the manuscript. Qingzhen Xu and Han Qiao were responsible for the design of the algorithm and the preparation of the experiments. The experiments and related discussion were carried out by Qingzhen Xu, Han Qiao, Shuang Liu, and Shouqiang Liu. Qingzhen Xu, Han Qiao, and Shouqiang Liu wrote the manuscript. Shuang Liu and Shouqiang Liu were responsible for the final optimization. All authors commented on previous versions of the manuscript and read and approved the final version.

Corresponding author

Correspondence to Shouqiang Liu.

Ethics declarations

Ethics approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

This article does not contain any studies with animals performed by any of the authors. Informed consent was obtained from all individual participants included in the study.

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Figure 5 shows examples of real and fake frames [5]. The frame on the left is real and the frame on the right is synthesized. We distinguish real images from fake ones by processing the blood volume pulse (BVP) values extracted from the face data.

Fig. 5: Examples of real and fake frames
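A hedged illustration of how a recovered BVP signal could separate real from fake clips is sketched below in Python with NumPy: real faces concentrate spectral power in a plausible heart-rate band (roughly 0.7 to 4 Hz), whereas synthesized faces tend not to. The band limits, the 0.5 threshold, and the function names hr_band_power_ratio and looks_real are hypothetical choices for exposition, not the decision procedure used in this paper.

# Illustrative sketch: judge a clip by how much of its BVP spectrum lies in the heart-rate band.
import numpy as np


def hr_band_power_ratio(bvp, fps=30.0, band=(0.7, 4.0)):
    """Fraction of the BVP signal's spectral power inside a plausible heart-rate band
    (0.7-4.0 Hz, roughly 42-240 bpm). Higher values suggest a genuine pulse."""
    bvp = np.asarray(bvp, dtype=float)
    bvp = bvp - bvp.mean()                              # remove the DC offset
    power = np.abs(np.fft.rfft(bvp)) ** 2
    freqs = np.fft.rfftfreq(bvp.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = power[1:].sum()                             # ignore the DC bin
    return float(power[in_band].sum() / total) if total > 0 else 0.0


def looks_real(bvp, fps=30.0, threshold=0.5):
    """Hypothetical decision rule: flag the clip as real if enough power lies in the HR band."""
    return hr_band_power_ratio(bvp, fps) >= threshold


if __name__ == "__main__":
    t = np.arange(0, 10, 1 / 30.0)                                        # 10 s at 30 fps
    pulse = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)   # ~72 bpm pulse
    noise = np.random.randn(t.size)                                       # no periodic pulse
    print(looks_real(pulse), looks_real(noise))                           # typically: True False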

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Xu, Q., Qiao, H., Liu, S. et al. Deepfake detection based on remote photoplethysmography. Multimed Tools Appl 82, 35439–35456 (2023). https://doi.org/10.1007/s11042-023-14744-z
