Abstract
Emotion estimation is widely used in industry to evaluate consumer satisfaction with products. In general, emotion can be estimated from observable expressions, such as facial expressions, or from unobservable signals, such as biological signals. Although widely used in research, facial expression recognition lacks precision when expressions closely resemble one another or when the displayed expression differs from the subject's true emotion. Biological signal indexes such as pNN50, on the other hand, can serve as a supporting mechanism to improve emotion estimation from observable expressions such as facial expression recognition. pNN50 is a reliable index of stress and relaxation, and because it originates in unconscious physiological activity, it cannot be deliberately manipulated. In this work, we propose a method for estimating pNN50 from facial video using a deep learning model. We employ transfer learning with a convolutional neural network (CNN) pre-trained for image recognition to estimate pNN50 from a spatiotemporal map created from a series of frames in a facial video. Trained to classify low, middle, and high pNN50 values, the model achieves an accuracy of about 80%. This result indicates the potential of the proposed method and suggests that it can be extended to finer-grained pNN50 categories.
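For context, pNN50 is the percentage of successive inter-beat (NN) intervals that differ by more than 50 ms, a standard heart rate variability index. A minimal sketch of this computation, assuming intervals are given in milliseconds (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def pnn50(nn_intervals_ms):
    """Percentage of successive NN-interval differences exceeding 50 ms."""
    diffs = np.abs(np.diff(nn_intervals_ms))  # differences between successive beats
    return 100.0 * np.count_nonzero(diffs > 50.0) / len(diffs)

# Hypothetical inter-beat intervals (ms); relaxed subjects tend to show
# larger beat-to-beat variation, hence higher pNN50.
rr = np.array([812, 845, 790, 860, 805, 870, 798])
print(f"pNN50 = {pnn50(rr):.1f}%")
```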
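The abstract does not spell out how the spatiotemporal map is built. Remote-photoplethysmography approaches typically average the color of spatial blocks of the face region in each frame and stack those averages over time, so that each row summarizes one frame and each column traces one block's color signal. A sketch under that assumption (grid size and array shapes are illustrative):

```python
import numpy as np

def spatiotemporal_map(frames, grid=(5, 5)):
    """Stack per-block mean colors of each face frame into a (T, blocks*3) map.

    frames: array of shape (T, H, W, 3) holding cropped face images.
    """
    _, h, w, _ = frames.shape
    gh, gw = grid
    rows = []
    for f in frames:
        # Mean RGB value of each block in a gh x gw grid over the face.
        blocks = [
            f[i * h // gh:(i + 1) * h // gh,
              j * w // gw:(j + 1) * w // gw].mean(axis=(0, 1))
            for i in range(gh) for j in range(gw)
        ]
        rows.append(np.concatenate(blocks))
    return np.stack(rows)  # one row per frame, one column per block channel

demo = spatiotemporal_map(np.random.rand(100, 120, 120, 3))
print(demo.shape)  # (100, 75)
```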
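As a hedged illustration of the transfer-learning setup described above, the sketch below fine-tunes a VGG16 backbone pre-trained on ImageNet to classify a spatiotemporal map into low, middle, or high pNN50. The choice of VGG16, the frozen backbone, the 224x224 input size, and the training details are assumptions for illustration, not the authors' exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a VGG16 pre-trained on ImageNet and replace its classifier head
# so it predicts three pNN50 classes: low, middle, high.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False           # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 3)  # new 3-class output layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# One training step on a placeholder batch of spatiotemporal maps
# resized to the 224x224 input expected by VGG16.
maps = torch.randn(8, 3, 224, 224)    # placeholder batch of maps
labels = torch.randint(0, 3, (8,))    # placeholder low/mid/high labels
optimizer.zero_grad()
loss = criterion(model(maps), labels)
loss.backward()
optimizer.step()
```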
Acknowledgements
This work was partially supported by JSPS KAKENHI Grant Number JP19K20302.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Ziaratnia, S., Sripian, P., Laohakangvalvit, T., Ohzeki, K., Sugaya, M. (2021). End-To-End Deep Learning for pNN50 Estimation Using a Spatiotemporal Representation. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2021 - Posters. HCII 2021. Communications in Computer and Information Science, vol 1420. Springer, Cham. https://doi.org/10.1007/978-3-030-78642-7_79
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78641-0
Online ISBN: 978-3-030-78642-7