A Body Emotion-Based Human-Robot Interaction

  • Conference paper
  • First Online:
Computer Vision Systems (ICVS 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10528)


Abstract

In order to achieve reasonable and natural interaction when facing vague human actions, a body emotion-based human-robot interaction (BEHRI) algorithm was developed in this paper. Laban movement analysis and fuzzy logic inference were used to extract the movement emotion and the torso pose emotion. A finite state machine model was constructed to describe the robot's emotional dynamics, and an interactive strategy was then designed to generate suitable interactive behaviors. The algorithm was evaluated on the UTD-MHAD dataset, and the overall system was assessed via a questionnaire. The experimental results indicated that the proposed BEHRI algorithm analyzes body emotion accurately and that the interactive behaviors were accessible and satisfying, suggesting that BEHRI has good application potential.
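The abstract outlines a pipeline in which fuzzy inference over Laban-style movement features feeds a finite state machine that governs the robot's emotional state. The Python sketch below is purely illustrative and is not the paper's BEHRI implementation: the emotion labels, the feature names (speed, acceleration), the membership functions, the rule base, and the transition threshold are all assumptions introduced here to make the idea concrete.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def movement_emotion(speed, acceleration):
    """Toy fuzzy inference: map two Laban-effort-style features
    (both normalized to [0, 1]) to emotion memberships.
    The rules are illustrative, not the paper's rule base."""
    fast = triangular(speed, 0.4, 1.0, 1.6)
    slow = triangular(speed, -0.6, 0.0, 0.6)
    strong = triangular(acceleration, 0.4, 1.0, 1.6)
    weak = triangular(acceleration, -0.6, 0.0, 0.6)
    return {
        "happy": min(fast, weak),
        "angry": min(fast, strong),
        "sad": min(slow, weak),
        "calm": min(slow, strong),
    }


class RobotEmotionFSM:
    """Minimal finite state machine for the robot's emotional state:
    the state changes only when an observed emotion is dominant enough."""

    def __init__(self, threshold=0.5):
        self.state = "calm"
        self.threshold = threshold

    def step(self, memberships):
        label, score = max(memberships.items(), key=lambda kv: kv[1])
        if score >= self.threshold:
            self.state = label
        return self.state


if __name__ == "__main__":
    fsm = RobotEmotionFSM()
    obs = movement_emotion(speed=0.9, acceleration=0.8)
    print(obs, "->", fsm.step(obs))  # dominant "angry" membership drives the state
```

A full system in the spirit of the paper would replace this toy rule base with memberships derived from Laban effort factors and torso pose features, and would drive the robot's interactive behaviors from the FSM state rather than simply printing it.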

References

  1. Reddy, K.K., Shah, M.: Recognizing 50 human action categories of web videos. Mach. Vis. Appl. 24(5), 971–981 (2013)

  2. Alonso Martín, F., Ramey, A., Salichs, M.A.: Speaker identification using three signal voice domains during human-robot interaction. In: Proceedings of 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 114–115. ACM (2014)

  3. Chaaraoui, A.A., Padilla-López, J.R., Climent-Pérez, P., Flórez-Revuelta, F.: Evolutionary joint selection to improve human action recognition with RGB-D devices. Expert Syst. Appl. 41(3), 786–794 (2014)

  4. Venkataraman, V., Turaga, P., Lehrer, N., Baran, M., Rikakis, T., Wolf, S.L.: Attractor-shape for dynamical analysis of human movement: applications in stroke rehabilitation and action recognition. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 514–520. IEEE Press (2013)

  5. Siddiqi, M.H., Ali, R., Khan, A.M., Park, Y.-T., Lee, S.: Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE Trans. Image Process. 24(4), 1386–1398 (2015)

  6. Yildiz, I.B., von Kriegstein, K., Kiebel, S.J.: From birdsong to human speech recognition: Bayesian inference on a hierarchy of nonlinear dynamical systems. PLoS Comput. Biol. 9(9), 1–16 (2013)

  7. Chatterjee, M., Peng, S.-C.: Processing F0 with cochlear implants: modulation frequency discrimination and speech intonation recognition. Hear. Res. 235(1), 143–156 (2008)

  8. Lichtenstern, M., Frassl, M., Perun, B., Angermann, M.: A prototyping environment for interaction between a human and a robotic multi-agent system. In: 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 185–186. IEEE Press (2012)

  9. Yamada, T., Murata, S., Arie, H., Ogata, T.: Dynamical integration of language and behavior in a recurrent neural network for human-robot interaction. Front. Neurorobot. 10(5), 1–17 (2016)

  10. Palm, R., Chadalavada, R., Lilienthal, A.: Fuzzy modeling and control for intention recognition in human-robot systems. In: Proceedings of the 8th International Joint Conference on Computational Intelligence (IJCCI), Porto, Portugal, pp. 67–74. SciTePress (2016)

  11. Liu, P., Glas, D.F., Kanda, T., Ishiguro, H.: Data-driven HRI: learning social behaviors by example from human-human interaction. IEEE Trans. Robot. 32(4), 988–1008 (2016)

  12. Bohus, D., Horvitz, E.: Managing human-robot engagement with forecasts and… um… hesitations. In: Proceedings of 16th International Conference on Multimodal Interaction, pp. 2–9. ACM (2014)

  13. Aly, A., Tapus, A.: A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction. In: Proceedings of 8th ACM/IEEE International Conference on Human-Robot Interaction, pp. 325–332. IEEE Press (2013)

  14. Liu, Z., Wu, M., Li, D., Chen, L., Dong, F., Yamazaki, Y., Hirota, K.: Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots. J. Automat. Mob. Robot. Intell. Syst. 7(2), 52–63 (2013)

  15. Dautenhahn, K.: Socially intelligent robots: dimensions of human–robot interaction. Philos. Trans. Roy. Soc. Lond. B 362(1480), 679–704 (2007)

  16. Laban, R.: The Language of Movement: A Guidebook to Choreutics. Plays, Boston (1974)

  17. Hsieh, C., Wang, Y.: Digitalize emotions to improve the quality life-analyzing movement for emotion application. J. Aesthet. Educ. 168, 64–69 (2009)

  18. Ku, M.-S., Chen, Y.: From movement to emotion - a basic research of upper body (analysis foundation of body movement in the digital world 3 of 3). J. Aesthet. Educ. 164, 38–43 (2008)

  19. Kinect - Windows App Development. https://developer.microsoft.com/en-us/windows/kinect

  20. Xia, G., Tay, J., Dannenberg, R., Veloso, M.: Autonomous robot dancing driven by beats and emotions of music. In: Proceedings of 11th International Conference on Autonomous Agents and Multiagent Systems, vol. 1, pp. 205–212. International Foundation for Autonomous Agents and Multiagent Systems (2012)

  21. Chen, C., Jafari, R., Kehtarnavaz, N.: UTD-MHAD: a multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor. In: 2015 IEEE International Conference on Image Processing (ICIP), pp. 168–172. IEEE Press (2015)

  22. Nao Robot: Characteristics - Aldebaran. https://www.ald.softbankrobotics.com/en/cool-robots/nao/find-out-more-about-nao

Acknowledgements

This work received funding from the Major Research Plan of the National Natural Science Foundation of China (91646205) and the National Natural Science Foundation of China (61403368).

Author information

Corresponding author

Correspondence to Jing Xiong.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zhu, T., Zhao, Q., Xiong, J. (2017). A Body Emotion-Based Human-Robot Interaction. In: Liu, M., Chen, H., Vincze, M. (eds) Computer Vision Systems. ICVS 2017. Lecture Notes in Computer Science, vol 10528. Springer, Cham. https://doi.org/10.1007/978-3-319-68345-4_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-68345-4_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68344-7

  • Online ISBN: 978-3-319-68345-4

  • eBook Packages: Computer Science, Computer Science (R0)
