An Emotional Expression Monitoring Tool for Facial Videos

  • Conference paper
  • First Online:
Ubiquitous Computing and Ambient Intelligence (UCAmI 2017)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10586)


Abstract

The proliferation of mobile devices and the ubiquitous nature of cameras today increase the importance of Emotionally Aware Computational Devices. We present a tool that helps clinicians and mental health professionals monitor and assess patients by providing an automated appraisal of a patient's mood as determined from facial expressions. The app takes video of a patient as input and produces an annotated, configurable record for the clinician, accessible from mobile devices, the Internet, or IoT devices.
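
The abstract gives no implementation detail, so the following is only a minimal sketch of the kind of pipeline it describes: per-frame face detection, frame-level expression classification, and a timestamped record the clinician can later review. The sketch uses OpenCV's stock Haar-cascade face detector; the classify_expression stub, the file names, and the JSON record layout are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the authors' implementation): read a facial video,
    # detect faces per frame with OpenCV's Haar cascade, and append a
    # timestamped emotion label to a record for later clinical review.
    import json
    import cv2

    def classify_expression(face_roi):
        # Hypothetical placeholder: a real system would run a trained
        # expression classifier here and return a basic-emotion label.
        return "neutral"

    def annotate_video(video_path, out_path="record.json"):
        cap = cv2.VideoCapture(video_path)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
        record, frame_idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                label = classify_expression(gray[y:y + h, x:x + w])
                record.append({"time_s": round(frame_idx / fps, 2),
                               "emotion": label})
            frame_idx += 1
        cap.release()
        with open(out_path, "w") as f:
            json.dump(record, f, indent=2)  # annotated record for the clinician

    if __name__ == "__main__":
        annotate_video("patient_session.mp4")

A real deployment would replace classify_expression with a trained model (e.g. a CNN over face crops) and serve the resulting record through whatever mobile, web, or IoT front end the clinician uses.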

Author information

Corresponding author

Correspondence to Indrani Mandal.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Mandal, I., Ferguson, T., De Pace, G., Mankodiya, K. (2017). An Emotional Expression Monitoring Tool for Facial Videos. In: Ochoa, S., Singh, P., Bravo, J. (eds) Ubiquitous Computing and Ambient Intelligence. UCAmI 2017. Lecture Notes in Computer Science, vol 10586. Springer, Cham. https://doi.org/10.1007/978-3-319-67585-5_77

  • DOI: https://doi.org/10.1007/978-3-319-67585-5_77

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67584-8

  • Online ISBN: 978-3-319-67585-5

  • eBook Packages: Computer Science, Computer Science (R0)
