• Guhr O, Loitsch C, Weber G and Böhme H. Enhancing Usability of Voice Interfaces for Socially Assistive Robots Through Deep Learning: A German Case Study. Artificial Intelligence in HCI. (231-249).

    https://doi.org/10.1007/978-3-031-60615-1_15

  • Milde S, Runzheimer T, Friesen S, Peiffer J, Höfler J, Geis K, Milde J and Blum R. Studying Multi-modal Human Robot Interaction Using a Mobile VR Simulation. Human-Computer Interaction. (140-155).

    https://doi.org/10.1007/978-3-031-35602-5_11

  • Jeffcock J, Hansen M and Ruiz Garate V. Transformers and Human-robot Interaction for Delirium Detection. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. (466-474).

    https://doi.org/10.1145/3568162.3576971

  • Constantin S, Eyiokur F, Yaman D, Bärmann L and Waibel A. Interactive Multimodal Robot Dialog Using Pointing Gesture Recognition. Computer Vision – ECCV 2022 Workshops. (640-657).

    https://doi.org/10.1007/978-3-031-25075-0_43

  • Krishna Sharma V, Murthy L and Biswas P. (2022). Comparing Two Safe Distance Maintenance Algorithms for a Gaze-Controlled HRI Involving Users with SSMI. ACM Transactions on Accessible Computing. 15:3. (1-23). Online publication date: 30-Sep-2022.

    https://doi.org/10.1145/3530822

  • Naik L, Palinko O, Bodenhagen L and Krüger N. Multi-modal Proactive Approaching of Humans for Human-Robot Cooperative Tasks. 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). (323-329).

    https://doi.org/10.1109/RO-MAN50785.2021.9515475

  • Krishna Sharma V, Saluja K, Mollyn V and Biswas P. Eye Gaze Controlled Robotic Arm for Persons with Severe Speech and Motor Impairment. ACM Symposium on Eye Tracking Research and Applications. (1-9).

    https://doi.org/10.1145/3379155.3391324

  • Showers A and Si M. Pointing Estimation for Human-Robot Interaction Using Hand Pose, Verbal Cues, and Confidence Heuristics. Social Computing and Social Media. Technologies and Analytics. (403-412).

    https://doi.org/10.1007/978-3-319-91485-5_31

  • Ouellet S and Michaud F. (2018). Enhanced automated body feature extraction from a 2D image using anthropomorphic measures for silhouette analysis. Expert Systems with Applications: An International Journal. 91:C. (270-276). Online publication date: 1-Jan-2018.

    https://doi.org/10.1016/j.eswa.2017.09.006

  • Munteanu C and Salah A. Multimodal technologies for seniors. The Handbook of Multimodal-Multisensor Interfaces. (319-362).

    https://doi.org/10.1145/3015783.3015793

  • Bastianelli E, Nardi D, Aiello L, Giacomelli F and Manes N. (2016). Speaky for robots. Applied Intelligence. 44:1. (43-66). Online publication date: 1-Jan-2016.

    https://doi.org/10.1007/s10489-015-0695-5

  • Ishi C, Even J and Hagita N. Speech activity detection and face orientation estimation using multiple microphone arrays and human position information. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (5574-5579).

    https://doi.org/10.1109/IROS.2015.7354167

  • Gemignani G, Veloso M and Nardi D. Language-Based Sensing Descriptors for Robot Object Grounding. RoboCup 2015: Robot World Cup XIX. (3-15).

    https://doi.org/10.1007/978-3-319-29339-4_1

  • Mavridis N. (2015). A review of verbal and non-verbal human-robot interactive communication. Robotics and Autonomous Systems. 63:P1. (22-35). Online publication date: 1-Jan-2015.

    https://doi.org/10.1016/j.robot.2014.09.031

  • Cuayáhuitl H, Kruijff-Korbayová I and Dethlefs N. (2014). Nonstrict Hierarchical Reinforcement Learning for Interactive Systems and Robots. ACM Transactions on Interactive Intelligent Systems. 4:3. (1-30). Online publication date: 21-Nov-2014.

    https://doi.org/10.1145/2659003

  • Mehlmann G, Häring M, Janowski K, Baur T, Gebhard P and André E. Exploring a Model of Gaze for Grounding in Multimodal HRI. Proceedings of the 16th International Conference on Multimodal Interaction. (247-254).

    https://doi.org/10.1145/2663204.2663275

  • Milliez G, Ferreira E, Fiore M, Alami R and Lefèvre F. Simulating Human-Robot Interactions for Dialogue Strategy Learning. Proceedings of the 4th International Conference on Simulation, Modeling, and Programming for Autonomous Robots - Volume 8810. (62-73).

    https://doi.org/10.1007/978-3-319-11900-7_6

  • Xiao Y, Zhang Z, Beck A, Yuan J and Thalmann D. (2014). Human-robot interaction by understanding upper body gestures. Presence: Teleoperators and Virtual Environments. 23:2. (133-154). Online publication date: 1-Aug-2014.

    https://dl.acm.org/doi/10.5555/2812348.2812351

  • Krastev A, Lekova A, Dimitrova M and Chavdarov I. An interactive technology to support education of children with hearing problems. Proceedings of the 15th International Conference on Computer Systems and Technologies. (445-451).

    https://doi.org/10.1145/2659532.2659603

  • Fukui R, Watanabe M, Shimosaka M and Sato T. (2014). Hand-shape classification with a wrist contour sensor. International Journal of Robotics Research. 33:4. (658-671). Online publication date: 1-Apr-2014.

    https://doi.org/10.1177/0278364913507984

  • Lucignano L, Cutugno F, Rossi S and Finzi A. A dialogue system for multimodal human-robot interaction. Proceedings of the 15th ACM on International conference on multimodal interaction. (197-204).

    https://doi.org/10.1145/2522848.2522873

  • Randelli G, Bonanni T, Iocchi L and Nardi D. (2013). Knowledge acquisition through human-robot multimodal interaction. Intelligent Service Robotics. 6:1. (19-31). Online publication date: 1-Jan-2013.

    https://doi.org/10.1007/s11370-012-0123-1

  • Tan J and Inamura T. Extending chatterbot system into multimodal interaction framework with embodied contextual understanding. Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. (251-252).

    https://doi.org/10.1145/2157689.2157780

  • Cuayáhuitl H and Dethlefs N. (2011). Spatially-aware dialogue control using hierarchical reinforcement learning. ACM Transactions on Speech and Language Processing. 7:3. (1-26). Online publication date: 1-May-2011.

    https://doi.org/10.1145/1966407.1966410

  • Smith A and Brown E. (2011). Myoelectric control techniques for a rehabilitation robot. Applied Bionics and Biomechanics. 8:1. (21-37). Online publication date: 1-Jan-2011.

    https://dl.acm.org/doi/10.5555/2597335.2597338

  • Avilés H, Alvarado-González M, Venegas E, Rascón C, Meza I and Pineda L. Development of a tour-guide robot using dialogue models and a cognitive architecture. Proceedings of the 12th Ibero-American conference on Advances in artificial intelligence. (512-521).

    https://dl.acm.org/doi/10.5555/1948131.1948196

  • Rascón C, Avilés H and Pineda L. Robotic orientation towards speaker for human-robot interaction. Proceedings of the 12th Ibero-American conference on Advances in artificial intelligence. (10-19).

    https://dl.acm.org/doi/10.5555/1948131.1948135

  • Kanda T, Shiomi M, Miyashita Z, Ishiguro H and Hagita N. (2010). A communication robot in a shopping mall. IEEE Transactions on Robotics. 26:5. (897-913). Online publication date: 1-Oct-2010.

    https://doi.org/10.1109/TRO.2010.2062550

  • Kasun Prasanga D and Ohnishi K. Simultaneous locomotion of biped robot with the transmission of human motion. IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society. (797-802).

    https://doi.org/10.1109/IECON.2016.7793264