
End-To-End Deep Learning for pNN50 Estimation Using a Spatiotemporal Representation

  • Conference paper
  • First Online:
HCI International 2021 - Posters (HCII 2021)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1420))


Abstract

Various industries widely use emotion estimation to evaluate consumer satisfaction with their products. Generally, emotion can be estimated from observable expressions, such as facial expressions, or from unobservable signals, such as biological signals. Although widely used, facial expression recognition lacks precision for expressions that closely resemble one another, or in situations where the displayed expression differs from the subject's actual emotion. On the other hand, biological signal indexes such as pNN50 can serve as a supportive mechanism to improve emotion estimation from observable expressions such as facial expression recognition. pNN50 is a reliable index of stress and relaxation, and it originates from unconscious physiological activity that cannot be deliberately manipulated. In this work, we propose a method for estimating pNN50 from facial video using a deep learning model. A transfer learning technique and a pre-trained image recognition convolutional neural network (CNN) are employed to estimate pNN50 from a spatiotemporal map created from a series of frames in a facial video. The model, trained to classify low, middle, and high pNN50 values, achieves an accuracy of about 80%. This result indicates the potential of the proposed method, which we can expand to categorize pNN50 values at a more detailed level.
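As background for the abstract: pNN50 is a standard heart rate variability index, defined as the percentage of successive NN (normal-to-normal heartbeat) intervals that differ by more than 50 ms. A minimal sketch of the computation is below; the low/middle/high cut-offs are illustrative assumptions only, since the paper's bin edges are not stated here.

```python
def pnn50(nn_intervals_ms):
    """Percentage of successive NN interval differences exceeding 50 ms."""
    diffs = [abs(b - a) for a, b in zip(nn_intervals_ms, nn_intervals_ms[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

def pnn50_level(value, low_cut=20.0, high_cut=40.0):
    """Map a pNN50 value to a coarse class; the cut-offs are
    illustrative assumptions, not the thresholds used in the paper."""
    if value < low_cut:
        return "low"
    return "middle" if value < high_cut else "high"

# Four intervals give three successive differences: 60, 40, 80 ms;
# two of the three exceed 50 ms, so pNN50 is about 66.7.
p = pnn50([800, 860, 820, 900])
print(p, pnn50_level(p))
```

The proposed model learns this quantity indirectly: instead of measuring NN intervals with a contact sensor, it classifies the pNN50 level from a spatiotemporal map of facial-video frames.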



Acknowledgements

This work was partially supported by JSPS KAKENHI Grant Number JP19K20302.

Author information

Corresponding author

Correspondence to Sayyedjavad Ziaratnia.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Ziaratnia, S., Sripian, P., Laohakangvalvit, T., Ohzeki, K., Sugaya, M. (2021). End-To-End Deep Learning for pNN50 Estimation Using a Spatiotemporal Representation. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2021 - Posters. HCII 2021. Communications in Computer and Information Science, vol 1420. Springer, Cham. https://doi.org/10.1007/978-3-030-78642-7_79

  • DOI: https://doi.org/10.1007/978-3-030-78642-7_79

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78641-0

  • Online ISBN: 978-3-030-78642-7

  • eBook Packages: Computer Science, Computer Science (R0)
