Abstract
The proliferation of mobile devices and the ubiquity of cameras have increased the importance of Emotionally Aware Computational Devices. We present a tool that helps clinicians and mental health professionals monitor and assess patients by providing an automated appraisal of a patient's mood as determined from facial expressions. The App takes a patient's video as input and produces an annotated, configurable record for the clinician, accessible from mobile devices, the Internet, or IoT devices.
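The abstract does not give implementation details, so the following is only a minimal sketch of the video-to-annotated-record pipeline it describes, assuming OpenCV for frame decoding and face detection; the classify_emotion() stub and the JSON record fields are illustrative placeholders, not the authors' actual model or record format.

import json
import cv2  # OpenCV: video decoding and Haar-cascade face detection

# Stock frontal-face detector shipped with OpenCV (a stand-in for whatever
# detector the App actually uses).
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def classify_emotion(face_img):
    """Placeholder for the facial-expression classifier; a real system
    would run a trained model here and return an emotion label."""
    return "neutral"


def annotate_video(path, sample_every=30):
    """Sample frames from the video, detect faces, and build a per-frame
    record that a clinician-facing front end could consume."""
    capture = cv2.VideoCapture(path)
    records = []
    frame_idx = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(gray, 1.3, 5):
                face = gray[y:y + h, x:x + w]
                records.append({
                    "frame": frame_idx,
                    "bbox": [int(x), int(y), int(w), int(h)],
                    "emotion": classify_emotion(face),
                })
        frame_idx += 1
    capture.release()
    return records


if __name__ == "__main__":
    # Hypothetical input file name; prints the annotated record as JSON.
    print(json.dumps(annotate_video("patient_session.mp4"), indent=2))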
Cite this paper
Mandal, I., Ferguson, T., De Pace, G., Mankodiya, K. (2017). An Emotional Expression Monitoring Tool for Facial Videos. In: Ochoa, S., Singh, P., Bravo, J. (eds.) Ubiquitous Computing and Ambient Intelligence. UCAmI 2017. Lecture Notes in Computer Science, vol. 10586. Springer, Cham. https://doi.org/10.1007/978-3-319-67585-5_77
Print ISBN: 978-3-319-67584-8
Online ISBN: 978-3-319-67585-5