Research Article | Open Access

Learning to Engage with Interactive Systems: A Field Study on Deep Reinforcement Learning in a Public Museum

Published: 20 October 2020

Abstract

Physical agents that can autonomously generate engaging, life-like behavior will lead to more responsive and user-friendly robots and other autonomous systems. Although many advances have been made for one-to-one interactions in well-controlled settings, physical agents should be capable of interacting with humans in natural settings, including group interaction. To generate engaging behaviors, the autonomous system must first be able to estimate its human partners’ engagement level. In this article, we propose an approach for estimating engagement during group interaction by simultaneously taking into account active and passive interaction, and use the measure as the reward signal within a reinforcement learning framework to learn engaging interactive behaviors. The proposed approach is implemented in an interactive sculptural system in a museum setting. We compare the learning system to a baseline using pre-scripted interactive behaviors. Analysis based on sensory data and survey data shows that adaptable behaviors within an expert-designed action space can achieve higher engagement and likeability.
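
The approach outlined above, estimating group engagement from active and passive interaction and feeding that estimate back as the reinforcement learning reward, can be pictured as a simple sensing-acting-learning loop. The following is a minimal, hypothetical sketch only: the sensor channels (`active_events`, `passive_dwell`), the fixed weights, and the placeholder `RandomAgent` are assumptions made for illustration and do not reproduce the authors' engagement estimator or learning algorithm.

```python
# Minimal sketch of "engagement as reward" for an interactive installation.
# All names, weights, and sensor channels below are illustrative assumptions,
# not the implementation described in the article.

import numpy as np


def engagement_reward(active_events, passive_dwell, w_active=0.7, w_passive=0.3):
    """Scalar engagement estimate for one control step.

    active_events : per-sensor counts of direct interactions (hypothetical channel)
    passive_dwell : per-zone dwell fractions near the sculpture (hypothetical channel)
    """
    active_score = np.tanh(np.sum(active_events))             # saturate bursty input
    passive_score = np.clip(np.mean(passive_dwell), 0.0, 1.0)
    return float(w_active * active_score + w_passive * passive_score)


class RandomAgent:
    """Placeholder policy; a real system would use a deep RL algorithm
    (e.g., an off-policy actor-critic) in place of this class."""

    def __init__(self, action_dim):
        self.action_dim = action_dim

    def act(self, observation):
        return np.random.uniform(-1.0, 1.0, self.action_dim)

    def observe(self, obs, action, reward, next_obs):
        pass  # a learning agent would store the transition and update here


def run_episode(agent, read_sensors, apply_action, steps=100):
    """Drive the installation for `steps` steps; the engagement estimate is the reward."""
    total_reward = 0.0
    features, active, passive = read_sensors()
    for _ in range(steps):
        action = agent.act(features)
        apply_action(action)
        next_features, active, passive = read_sensors()
        reward = engagement_reward(active, passive)
        agent.observe(features, action, reward, next_features)
        features = next_features
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    # Stand-in sensor and actuator callables so the sketch runs end to end.
    rng = np.random.default_rng(0)
    read = lambda: (rng.normal(size=8), rng.poisson(1.0, size=4), rng.uniform(size=4))
    print(run_episode(RandomAgent(action_dim=3), read, apply_action=lambda a: None))
```

Under this framing, swapping the placeholder agent for a continuous-control learner changes only the `act`/`observe` methods; the engagement estimate remains the sole reward signal.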

          Published In

          ACM Transactions on Human-Robot Interaction, Volume 10, Issue 1 (Research Notes), March 2021, 202 pages
          EISSN: 2573-9522
          DOI: 10.1145/3407734
          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          Published: 20 October 2020
          Accepted: 01 June 2020
          Revised: 01 March 2020
          Received: 01 April 2019
          Published in THRI Volume 10, Issue 1


          Author Tags

          1. Living architecture
          2. adaptive system
          3. engagement
          4. group interaction
          5. human-robot interaction
          6. interactive system
          7. natural setting interaction
          8. open-world interaction
          9. reinforcement learning
          10. robotic arts
          11. robotic sculpture
          12. social robot
          13. voluntary engagement

          Qualifiers

          • Research-article
          • Research
          • Refereed

          Funding Sources

          • Social Sciences and Humanities Research Council of Canada


          Article Metrics

          • Downloads (last 12 months): 270
          • Downloads (last 6 weeks): 29
          Reflects downloads up to 12 Nov 2024

          Cited By

          • (2024) A survey of communicating robot learning during human-robot interaction. The International Journal of Robotics Research. DOI: 10.1177/02783649241281369. Online publication date: 7-Oct-2024.
          • (2024) A Taxonomy of Robot Autonomy for Human-Robot Interaction. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 381-393. DOI: 10.1145/3610977.3634993. Online publication date: 11-Mar-2024.
          • (2023) Virtual Reality Solutions Employing Artificial Intelligence Methods: A Systematic Literature Review. ACM Computing Surveys 55:10, 1-29. DOI: 10.1145/3565020. Online publication date: 2-Feb-2023.
          • (2022) Learning on the Job: Long-Term Behavioural Adaptation in Human-Robot Interactions. IEEE Robotics and Automation Letters 7:3, 6934-6941. DOI: 10.1109/LRA.2022.3178807. Online publication date: Jul-2022.
          • (2021) Learning to Engage in Interactive Digital Art. Proceedings of the 26th International Conference on Intelligent User Interfaces, 275-279. DOI: 10.1145/3397481.3450691. Online publication date: 14-Apr-2021.
