Multi-robot behavior adaptation to local and global communication atmosphere in humans-robots interaction

  • Original Paper
Journal on Multimodal User Interfaces

Abstract

A multi-robot behavior adaptation mechanism based on cooperative–neutral–competitive fuzzy-Q learning is proposed for coordinating local communication atmospheres in humans-robots interaction, in which the communication atmosphere is represented by a two-layer fuzzy fusion model and visualized by shape–color–fill–wave graphics. The mechanism aims to realize smooth communication between humans and robots in interactions where local and global communication atmospheres coexist, by decreasing the robots' response time and the social distance between humans and robots, as well as by visualizing the communication atmosphere. Experiments on multi-robot behavior adaptation are performed in a virtual home-party environment. Results show that the proposal saves 47 and 103 learning steps (i.e., the learning rate is increased by 72 % and 85 %) compared with fuzzy production rule based friend-Q learning (FPRFQ) and friend-Q learning (FQ), respectively, and that the distance between the human-generated and robot-generated atmospheres is 3 times and 7 times shorter than with FPRFQ and FQ, respectively. In addition, subjective evaluation of the graphic visualization of the atmosphere through a questionnaire yields 85.5 % accuracy for shape, 77.2 % for color, 65.3 % for fill, and 91.7 % for wave. The proposed mechanism is being extended to robot behavior adaptation to an international communication atmosphere, where the atmosphere is generated by people from different countries with different cultural backgrounds.
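To make the learning terminology in the abstract concrete, the sketch below shows a minimal tabular Q-learning update together with a hypothetical fuzzy blending of cooperative, neutral, and competitive value estimates. It is an illustrative assumption, not the authors' implementation: all names, parameters (alpha, gamma), and the membership-weighted blending are invented here purely to indicate the kind of update rule that friend-Q and fuzzy-Q variants build on.

```python
import numpy as np

# Minimal tabular Q-learning sketch (standard one-step update).
# NOT the paper's implementation; parameters and the blending scheme
# below are illustrative assumptions only.

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor (assumed)

Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(s, a, r, s_next):
    """Standard one-step Q-learning update."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])

def fuzzy_blended_value(s, memberships):
    """Hypothetical blending of cooperative / neutral / competitive value
    estimates, weighted by fuzzy membership degrees of the current
    communication atmosphere (illustrative only)."""
    v_coop = Q[s].max()      # cooperative: optimistic value estimate
    v_neutral = Q[s].mean()  # neutral: average value estimate
    v_comp = Q[s].min()      # competitive: pessimistic value estimate
    w = np.asarray(memberships, dtype=float)
    w = w / w.sum()          # normalize membership degrees
    return w @ np.array([v_coop, v_neutral, v_comp])

# Toy usage: one interaction step with random state/action indices
rng = np.random.default_rng(0)
s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
q_update(s, a, r=1.0, s_next=rng.integers(N_STATES))
print(fuzzy_blended_value(s, memberships=[0.6, 0.3, 0.1]))
```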




Acknowledgments

The authors wish to thank the reviewers for valuable suggestions that improved the quality of this paper. They also wish to thank Min Ding, Fei Yan, Jiajun Lu, and Maslina Binti Zolkepli for their help with the paper’s revision. This work was supported by the National Natural Science Foundation of China under Grant 61210011.

Author information

Correspondence to Lue-Feng Chen.


Cite this article

Chen, LF., Liu, ZT., Wu, M. et al. Multi-robot behavior adaptation to local and global communication atmosphere in humans-robots interaction. J Multimodal User Interfaces 8, 289–303 (2014). https://doi.org/10.1007/s12193-014-0156-1
