Abstract
Many kinds of interaction schemes for human-robot interaction (HRI) have been reported in recent years. However, most of these schemes rely on recognizing human actions; once the recognition algorithm fails, the robot's reaction cannot proceed. This issue is often overlooked in traditional HRI, yet it is key to further improving the fluency and friendliness of HRI. In this work, a sociable HRI (SoHRI) scheme based on body emotion analysis was developed to achieve reasonable and natural interaction even when human actions are not recognized. First, the emotions conveyed by dynamic movements and static poses of humans were quantified using Laban movement analysis. Second, an interaction strategy built on a finite state machine model was designed to describe the transition rules of the human emotion state. Finally, an appropriate interactive behavior of the robot was selected according to the inferred human emotion state. The quantification performance of SoHRI was verified on the UTD-MHAD dataset, and the whole scheme was evaluated through questionnaires completed by participants and spectators. The experimental results show that the SoHRI scheme can analyze body emotion precisely and help the robot choose reasonable interactive behaviors.
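To make the finite-state interaction strategy concrete, the following minimal Python sketch shows how a state machine could map a quantified body-emotion (arousal) score to a human emotion state and then select a robot behavior. It is an illustrative sketch under assumed conventions, not the authors' SoHRI implementation: the state names, thresholds, and behavior labels are hypothetical stand-ins for the Laban-movement-analysis quantities described in the paper.

# Minimal illustrative sketch, NOT the authors' SoHRI implementation: a finite
# state machine that maps a quantified body-emotion (arousal) score in [0, 1]
# to a hypothetical human emotion state and selects a robot behavior label.
# State names, thresholds, and behaviors are assumptions for illustration only.

from dataclasses import dataclass

# Hypothetical mapping from inferred emotion state to a robot behavior label.
BEHAVIORS = {
    "calm": "greet_and_wait",
    "excited": "mirror_energetic_gesture",
    "depressed": "approach_slowly_and_encourage",
}

def next_state(state: str, arousal: float) -> str:
    """Transition rule with simple hysteresis: leaving an extreme state
    requires the score to cross a wider margin (threshold values assumed)."""
    if state == "excited":
        return "excited" if arousal > 0.6 else "calm"
    if state == "depressed":
        return "depressed" if arousal < 0.4 else "calm"
    if arousal > 0.7:
        return "excited"
    if arousal < 0.3:
        return "depressed"
    return "calm"

@dataclass
class SoHRIController:
    state: str = "calm"

    def step(self, arousal: float) -> str:
        """Advance the state machine one step and return the behavior label."""
        self.state = next_state(self.state, arousal)
        return BEHAVIORS[self.state]

if __name__ == "__main__":
    controller = SoHRIController()
    # Simulated stream of quantified body-emotion scores (placeholder values).
    for score in (0.5, 0.8, 0.9, 0.2, 0.4):
        behavior = controller.step(score)
        print(f"arousal={score:.1f} -> state={controller.state}, behavior={behavior}")

In this toy version the hysteresis thresholds play the role of the transition regulations that, in the paper, are derived from the quantified body-emotion analysis.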
References
K. K. Reddy and M. Shah, “Recognizing 50 human action categories of web videos,” Machine Vision and Applications, vol. 24, no. 5, pp. 971–981, June 2013.
M. M. Ullah and I. Laptev, “Actlets: A novel local representation for human action recognition in video,” Proc. of 19th IEEE International Conference on Image Processing, pp. 777–780, 2012.
F. Alonso Martín, A. Ramey, and M. A. Salichs, “Speaker identification using three signal voice domains during human–robot interaction,” Proc. of the ACM/IEEE International Conference on Human–robot Interaction, pp. 114–115, 2014.
A. A. Chaaraoui, J. R. Padilla–López, P. Climent–Pérez, and F. Flórez–Revuelta, “Evolutionary joint selection to improve human action recognition with RGB–D devices,” Expert Systems with Applications, vol. 41, no. 3, pp. 786–794, February 2014.
J. Wang, Z. Liu, and Y. Wu, “Learning actionlet ensemble for 3D human action recognition,” Human Action Recognition with Depth Cameras, Springer, pp. 11–40, January 2014.
C. Chen, K. Liu, and N. Kehtarnavaz, “Real–time human action recognition based on depth motion maps,” Journal of Real–time Image Processing, vol. 12, no. 1, pp. 155–163, June 2016.
V. Venkataraman, P. Turaga, N. Lehrer, M. Baran, T. Rikakis, and S. L. Wolf, “Attractor–shape for dynamical analysis of human movement: applications in stroke rehabilitation and action recognition,” Proc. of IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 514–520, 2013.
F. G. Da Silva and E. Galeazzo, “Accelerometer based intelligent system for human movement recognition,” Proc. of 5th IEEE International Workshop on Advances in Sensors and Interfaces (IWASI), pp. 20–24, 2013.
M. H. Siddiqi, R. Ali, A. M. Khan, Y. T. Park, and S. Lee, “Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields,” IEEE Transactions on Image Processing, vol. 24, no. 4, pp. 1386–1398, February 2015.
I. B. Yildiz, K. Von Kriegstein, and S. J. Kiebel, “From birdsong to human speech recognition: Bayesian inference on a hierarchy of nonlinear dynamical systems,” PLoS Comput Biol, vol. 9, no. 9, e1003219, September 2013.
M. Chatterjee and S.–C. Peng, “Processing F0 with cochlear implants: Modulation frequency discrimination and speech intonation recognition,” Hearing Research, vol. 235, no. 1, pp. 143–156, January 2008.
M. Lichtenstern, M. Frassl, B. Perun, and M. Angermann, “A prototyping environment for interaction between a human and a robotic multi–agent system,” Proc. of 7th ACM/IEEE International Conference on Human–Robot Interaction (HRI), pp. 185–186, 2012.
T. Yamada, S. Murata, H. Arie, and T. Ogata, “Dynamical integration of language and behavior in a recurrent neural network for human–robot interaction,” Frontiers in Neurorobotics, vol. 10, no. 11, pp. 6014–17, July 2016.
M. Farhad, S. N. Hossain, A. S. Khan, and A. Islam, “An efficient optical character recognition algorithm using artificial neural network by curvature properties of characters,” Proc. of International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1–5, 2014.
R. Palm, R. Chadalavada, and A. Lilienthal, “Fuzzy modeling and control for intention recognition in human–robot systems,” Proc. of the 8th International Joint Conference on Computational Intelligence (IJCCI 2016), FCTA, Porto, Portugal, pp. 67–74, 2016.
C. R. Guerrero, J. C. F. Marinero, J. P. Turiel, and V. Muñoz, “Using ‘human state aware’ robots to enhance physical human–robot interaction in a cooperative scenario,” Computer Methods and Programs in Biomedicine, vol. 112, no. 2, pp. 250–259, November 2013.
P. Liu, D. F. Glas, T. Kanda, and H. Ishiguro, “Data–driven HRI: learning social behaviors by example from human–human interaction,” IEEE Transactions on Robotics, vol. 32, no. 4, pp. 988–1008, August 2016.
D. Bohus and E. Horvitz, “Managing human–robot engagement with forecasts and... um... hesitations,” Proceedings of the 16th International Conference on Multimodal Interaction, pp. 2–9, 2014.
A. Aly and A. Tapus, “A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human–robot interaction,” Proceedings of the 8th ACM/IEEE International Conference on Human–robot Interaction, pp. 325–332, 2013.
D. Glowinski, A. Camurri, G. Volpe, N. Dael, and K. Scherer, “Technique for automatic emotion recognition by body gesture analysis,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPRW’08, pp. 1–6, 2008.
Z. Liu, M. Wu, D. Li, L. Chen, F. Dong, Y. Yamazaki, and K. Hirota, “Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots,” Journal of Automation Mobile Robotics and Intelligent Systems, vol. 7, no. 2, pp. 52–63, June 2013.
W. H. Kim, J. W. Park, W. H. Lee, H. S. Lee, and M. J. Chung, “LMA based emotional motion representation using RGB–D camera,” Proceedings of the 8th ACM/IEEE International Conference on Human–robot Interaction, pp. 163–164, 2013.
Aldebaran Robotics, “Nao robot: characteristics–Aldebaran,” https://www.ald.softbankrobotics.com/en/coolrobots/nao/find–out–more–about–nao.
R. Laban, The Language of Movement: A Guidebook to Choreutics, Plays Inc, Boston, 1974.
Y. Cheng, A Study on Semantic and Emotional Messages in Robot Movements, Department of Multimedia Design, National Taichung Institute of Technology, Taichung, 2010.
Y. Juan, Motion Style Synthesis Based on Laban Movement Analysis, Institute of Information Systems and Applications, National Tsing Hua University, Hsinchu, 2004.
C. Hsieh and Y. Wang, “Digitalize emotions to improve the quality of life–analyzing movement for emotion application,” Journal of Aesthetic Education, vol. 168, pp. 64–69, 2009.
M. S. Ku and Y. Chen, “From movement to emotion–a basic research of upper body (analysis foundation of body movement in the digital world 3 of 3),” Journal of Aesthetic Education, vol. 164, pp. 38–43, 2008.
R. C. Gonzalez and R. E. Woods, “Using fuzzy techniques for intensity transformations and spatial filtering,” Digital Image Processing, 3rd ed., Prentice Hall, p. 128, 2008.
I. Asimov, “Runaround,” Astounding Science Fiction, vol. 29, no. 1, pp. 94–103, March 1942.
E. Fosch Villaronga, A. Barco, B. Özcan, and J. Shukla, “An interdisciplinary approach to improving cognitive human–robot interaction–a novel emotion–based model,” What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016, pp. 195–205, October 2016.
M. Giuliani, C. Lenz, T. Müller, M. Rickert, and A. Knoll, “Design principles for safety in human–robot interaction,” International Journal of Social Robotics, vol. 2, no. 3, pp. 253–274, March 2010.
G. Xia, J. Tay, R. Dannenberg, and M. Veloso, “Autonomous robot dancing driven by beats and emotions of music,” Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems–Volume 1, pp. 205–212, 2012.
C. Chen, R. Jafari and N. Kehtarnavaz, “UTD–MHAD: a multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor,” Proc. of IEEE International Conference on Image Processing (ICIP), pp. 168–172, 2015.
G. Castellano, S. D. Villalba, and A. Camurri, “Recognising human emotions from body movement and gesture dynamics,” Proc. of International Conference on Affective Computing and Intelligent Interaction, pp. 71–82, 2007.
B. Kikhia, M. Gomez, L. L. Jiménez, J. Hallberg, N. Karvonen, and K. Synnes, “Analyzing body movements within the Laban effort framework using a single accelerometer,” Sensors, vol. 14, no. 3, pp. 5725–5741, March 2014.
Author information
Additional information
Recommended by Associate Editor Myung Geun Chun under the direction of Editor Euntai Kim. This work was supported by the National Natural Science Foundation of China (No. 61773365), the Major Research Plan of the National Natural Science Foundation of China (No. 91646205), the Major Project of the Guangdong Province Science and Technology Department (No. 2014B090919002), and the Shenzhen Research Project (No. GJHS20160331190459402).
Tehao Zhu received the B.S. degree in automation from Northwestern Polytechnical University, Xi’an, China, in 2009, and the M.S. degree in pattern recognition and intelligent systems from the University of Science and Technology of China, Hefei, China, in 2012. He is currently pursuing a Ph.D. degree at Shanghai Jiao Tong University, Shanghai, China. His current research interests include human-robot interaction, machine learning, and image processing.
Zeyang Xia received the B.S. degree in mechanical engineering from Shanghai Jiao Tong University, Shanghai, China, in 2002, and the Ph.D. degree in mechanical engineering from Tsinghua University, Beijing, China, in 2008. He is currently a Professor at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, and the director of the Medical Robotics and Biomechanics Laboratory (http://www.bigsmilelab.ac.cn). His research interests include biped humanoid robotics, medical robotics, and dental biomechanics. He has published over 80 peer-reviewed papers and applied for over 40 patents. He is the vice chairman of the Guangzhou Branch of the Youth Innovation Promotion Association, Chinese Academy of Sciences, and the co-chair of the Guangdong Chapter of the IEEE Robotics and Automation Society. He served as the Program Co-Chair of IEEE RCAR 2016 and ICVS 2017, and will be the General Chair of IEEE RCAR 2019.
Jiaqi Dong received the B.S. degree in automation from Shanghai Jiao Tong University, Shanghai, China, in 2014. She is currently pursuing a Ph.D. degree at Shanghai Jiao Tong University, Shanghai, China. Her current research interests include human-robot interaction and pattern recognition.
Qunfei Zhao received the B.S.E.E. degree from Xi’an Jiao Tong University, Xi’an, China, in 1982, and the Sc.D. degree in system science from Tokyo Institute of Technology, Tokyo, Japan, in 1988. He is currently a Professor at the School of Electronic Information and Electric Engineering, Shanghai Jiao Tong University, China. His research interests include robotics, machine vision, and optimal control of complex mechatronic systems.
About this article
Cite this article
Zhu, T., Xia, Z., Dong, J. et al. A Sociable Human-robot Interaction Scheme Based on Body Emotion Analysis. Int. J. Control Autom. Syst. 17, 474–485 (2019). https://doi.org/10.1007/s12555-017-0423-5