Challenges in Human-Robot Interactions for Social Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 8669

Special Issue Editors


Guest Editor
Dr. João Silva Sequeira
Instituto Superior Técnico, Lisbon University, 1649-004 Lisboa, Portugal
Interests: social robotics; human-robot interaction; architectural aspects of robotics, including robot modelling and control; networked and cooperative robotics; systems integration

Guest Editor
Dr. Álvaro Castro Gonzalez
Department of Robotics, University Carlos III of Madrid, Avda. de la Universidad 30, Leganés, 28911 Madrid, Spain
Interests: social robots; assistive robotics; human-robot interaction; autonomous robots; decision making; multimodal dialogue management; robot expressiveness; artificial emotions

Guest Editor
Dr. Fernando Alonso Martín
Department of Systems Engineering and Automation, Carlos III University, Madrid, Spain
Interests: human–robot interaction; feature extraction; pattern matching; speaker recognition; speech-based user interfaces; time-frequency analysis

Guest Editor
Dr. José Carlos Castillo
Department of Systems Engineering and Automation, Carlos III University, Madrid, Spain
Interests: social robotics; computer vision; perception; activity detection; human-robot interaction

Special Issue Information

Dear Colleagues,

Social robots are gradually becoming more influential in human societies, and successful interactions between robots and humans are essential to ensure that robots are integrated and accepted.

This Special Issue focuses on human–robot interactions in social robotics. Interactions with computational devices are now ubiquitous in daily life, and the lessons learned from them have informed the design of our interactions with robots. To learn about humans in their social environment, we must improve the effectiveness of interactions between humans and robots. Moreover, the technologies already used in social robotics shape how humans interact with these robots, while novel technologies and applications give robots and humans new ways to interact that can potentially improve human–robot interaction.

Robots complying with social norms, the challenges in developing interfaces to interact with social robots, the dynamics of human perception, novel interaction capabilities, sustainability, closing the technological gap between humans and robots, the design of robots with their purpose within the interaction in mind, and the future of social robots are just a few examples of the themes this Special Issue aims to cover.

Dr. João Silva Sequeira
Dr. Álvaro Castro Gonzalez
Dr. Fernando Alonso Martín
Dr. José Carlos Castillo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human-robot interaction
  • social robots
  • robot interfaces
  • human perception
  • interaction metrics
  • sustainability
  • human-robot gap

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

20 pages, 6562 KiB  
Article
Personalized Cognitive Support via Social Robots
by Jaime Andres Rincon Arango, Cedric Marco-Detchart and Vicente Javier Julian Inglada
Sensors 2025, 25(3), 888; https://doi.org/10.3390/s25030888 - 31 Jan 2025
Abstract
This paper explores the use of personalized cognitive support through social robots to assist the elderly in maintaining cognitive health and emotional well-being. As aging populations grow, the demand for innovative solutions to address issues like loneliness, cognitive decline, and physical limitations increases. The studied social robots utilize machine learning and advanced sensor technology to deliver real-time adaptive interactions, including cognitive exercises, daily task assistance, and emotional support. Through responsive and personalized features, the robot enhances user autonomy and improves quality of life by monitoring physical and emotional states and adapting to the needs of each user. This study also examines the challenges of implementing assistive robots in home and healthcare settings, offering insights into the evolving role of AI-powered social robots in eldercare. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. EmIR robot [33].
Figure 2. Real image of the E-Bot companion robot [38].
Figure 3. MTY robot [39].
Figure 4. Prototype assistance [40].
Figure 5. Social Robot uc3m [41].
24 pages, 1173 KiB  
Article
A Comprehensive Analysis of a Social Intelligence Dataset and Response Tendencies Between Large Language Models (LLMs) and Humans
by Erika Mori, Yue Qiu, Hirokatsu Kataoka and Yoshimitsu Aoki
Sensors 2025, 25(2), 477; https://doi.org/10.3390/s25020477 - 15 Jan 2025
Viewed by 531
Abstract
In recent years, advancements in the interaction and collaboration between humans and Artificial Intelligence (AI) have garnered significant attention. Social intelligence plays a crucial role in facilitating natural interactions and seamless communication between humans and AI. To assess AI’s ability to understand human interactions and the components necessary for such comprehension, datasets like Social-IQ have been developed. However, these datasets often rely on a simplistic question-and-answer format and lack justifications for the provided answers. Furthermore, existing methods typically produce direct answers by selecting from predefined choices without generating intermediate outputs, which hampers interpretability and reliability. To address these limitations, we conducted a comprehensive evaluation of AI methods on a video-based Question Answering (QA) benchmark focused on human interactions, leveraging additional annotations related to human responses. Our analysis highlights significant differences between human and AI response patterns and underscores critical shortcomings in current benchmarks. We anticipate that these findings will guide the creation of more advanced datasets and represent an important step toward achieving natural communication between humans and AI. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. Comparison of existing methods, GPT-4 Turbo, and human responses on the Social-IQ 2.0 dataset. Existing methods output only the selected option, whereas GPT-4 Turbo and humans can provide justifications alongside their selections; because Social-IQ 2.0 lacks ground-truth labels for reasoning, the validity of these justifications cannot be evaluated, a limitation this article addresses by comparing human and model responses.
Figure 2. Visualization of human and GPT-4 Turbo performance: proportions of correct, wrong, and unanswerable responses in the overall dataset (darker shades indicate higher proportions).
Figure 3. Example cases where humans answered correctly while GPT-4 Turbo provided a wrong answer.
Figure 4. Example cases where humans marked the question as unanswerable while GPT-4 Turbo answered correctly.
Figure 5. Visualization of human and GPT-4o performance: proportions of correct, wrong, and unanswerable responses in the overall dataset (darker shades indicate higher proportions).
Figure 6. Example cases where humans answered correctly while GPT-4o provided a wrong answer.
Figure 7. Example cases where humans marked the question as unanswerable while GPT-4o answered correctly.
Figure 8. Percentages of human results (a) and GPT-4 Turbo results (b) by question type (Timestamp, Emotion, Cause): blue bars show the proportion of correct samples, orange bars wrong responses, and green bars unanswerable responses, while red bars show each question type's share of all samples; deviations from the red bars indicate particularly high or low accuracy.
Figure 9. Percentages of human results (a) and GPT-4 Turbo results (b) by question type, presented in the same way as Figure 8.
Figure 10. Accuracy by the discriminability of the options: a score of 3 means all four options are semantically distinguishable, 2 means two or more options are not semantically distinguishable, and 1 means two or more options are completely identical.
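The comparison above tallies, for humans and for each model, how answers split into correct, wrong, and unanswerable outcomes per question type (Timestamp, Emotion, Cause). As a rough illustration of that kind of tabulation only, using made-up records and pandas rather than the authors' evaluation code, the proportions behind such per-type charts could be computed as follows:

```python
# Illustrative sketch only: hypothetical records, not the authors' pipeline.
import pandas as pd

# Each row: one question, its type, and the outcome of one responder (human or LLM).
records = [
    {"responder": "human", "qtype": "Timestamp", "outcome": "correct"},
    {"responder": "human", "qtype": "Emotion",   "outcome": "unanswerable"},
    {"responder": "llm",   "qtype": "Timestamp", "outcome": "wrong"},
    {"responder": "llm",   "qtype": "Cause",     "outcome": "correct"},
    # ... one record per (question, responder) pair in the benchmark
]
df = pd.DataFrame(records)

# Proportion of correct / wrong / unanswerable outcomes per responder and question type,
# i.e. the quantity visualized in the per-type bar charts.
proportions = (
    df.groupby(["responder", "qtype"])["outcome"]
      .value_counts(normalize=True)
      .unstack(fill_value=0.0)
)
print(proportions)
```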
21 pages, 3698 KiB  
Article
Child-Centric Robot Dialogue Systems: Fine-Tuning Large Language Models for Better Utterance Understanding and Interaction
by Da-Young Kim, Hyo Jeong Lym, Hanna Lee, Ye Jun Lee, Juhyun Kim, Min-Gyu Kim and Yunju Baek
Sensors 2024, 24(24), 7939; https://doi.org/10.3390/s24247939 - 12 Dec 2024
Viewed by 608
Abstract
Dialogue systems must understand children’s utterance intentions by considering their unique linguistic characteristics, such as syntactic incompleteness, pronunciation inaccuracies, and creative expressions, to enable natural conversational engagement in child–robot interactions. Even state-of-the-art large language models (LLMs) for language understanding and contextual awareness cannot comprehend children’s intent as accurately as humans because of their distinctive features. An LLM-based dialogue system should acquire the manner by which humans understand children’s speech to enhance its intention reasoning performance in verbal interactions with children. To this end, we propose a fine-tuning methodology that utilizes the LLM–human judgment discrepancy and interactive response data. The former data represent cases in which the LLM and human judgments of the contextual appropriateness of a child’s answer to a robot’s question diverge. The latter data involve robot responses suitable for children’s utterance intentions, generated by the LLM. We developed a fine-tuned dialogue system using these datasets to achieve human-like interpretations of children’s utterances and to respond adaptively. Our system was evaluated through human assessment using the Robotic Social Attributes Scale (RoSAS) and Sensibleness and Specificity Average (SSA) metrics. Consequently, it supports the effective interpretation of children’s utterance intentions and enables natural verbal interactions, even in cases with syntactic incompleteness and mispronunciations. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. Overview of the AI home robot service and interaction design from the authors' previous study.
Figure 2. Results of the Godspeed questionnaire.
Figure 3. Process of fine-tuning dataset construction (Q: robot's question; A: child's answer; R: interactive response).
Figure 4. Example of prompts and response-judgment data provided to the LLM and to humans.
Figure 5. Structure of the fine-tuning dataset with message roles.
Figure 6. Comparison of dialogue systems for a child's utterance lacking specificity.
Figure 7. Comparison of dialogue systems for a child's utterance with a subtle affirmative expression.
Figure 8. Comparison of dialogue systems for a child's utterance with mispronunciation or misrecognition.
Figure 9. Evaluation results for the dialogue system.
Figure A1. Dialogue system prompts.
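Figure 5 above refers to a fine-tuning dataset structured with message roles. As a hedged sketch of what a role-structured training record can look like in the common chat-style JSONL convention (the field names and example content here are assumptions for illustration, not the authors' actual schema), one record might be written as:

```python
# Illustrative sketch: a chat-style fine-tuning record with message roles.
# The schema and example content are assumptions, not the paper's exact dataset format.
import json

record = {
    "messages": [
        {"role": "system",
         "content": "You are a home robot talking with a young child. "
                    "Judge whether the child's answer fits the question and reply warmly."},
        {"role": "user",
         "content": "Robot question: 'What did you play today?'\n"
                    "Child answer: 'pway bwocks!'"},
        {"role": "assistant",
         "content": "Appropriate answer. Response: 'Blocks sound fun! What did you build?'"},
    ]
}

# One JSON object per line is the usual layout for chat-style fine-tuning data.
with open("finetune_sample.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```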
17 pages, 1797 KiB  
Article
The Role of Name, Origin, and Voice Accent in a Robot’s Ethnic Identity
by Jessica K. Barfield
Sensors 2024, 24(19), 6421; https://doi.org/10.3390/s24196421 - 4 Oct 2024
Cited by 1 | Viewed by 932
Abstract
This paper presents the results of an experiment that was designed to explore whether users assigned an ethnic identity to the Misty II robot based on the robot’s voice accent, place of origin, and given name. To explore this topic, a 2 × 3 within-subject study was run in which a humanoid robot spoke with a male or female gendered voice and used three different voice accents (Chinese, American, Mexican). With participants who identified as American, the results indicated that users were able to identify the gender and ethnic identity of the Misty II robot with a high degree of accuracy based on a minimum set of social cues. However, the version of Misty II presenting with an American ethnicity was more accurately identified than a robot presenting with cues signaling a Mexican or Chinese ethnicity. Implications of the results for the design of human-robot interfaces are discussed. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. The Misty II robot used in the experiment (image used with permission from Misty Robotics, Boulder, CO, USA).
Figure 2. Cues to ethnicity presented in the study.
Figure 3. Accuracy of correctly identifying robot ethnicity (N = 48).
Figure 4. Accuracy of determining robot ethnicity based on the robot's origin as presented in the robot narrative (N = 48).
Figure 5. Response accuracy in determining robot ethnicity based on the robot's accent (N = 48).
12 pages, 1823 KiB  
Article
When Trustworthiness Meets Face: Facial Design for Social Robots
by Yao Song and Yan Luximon
Sensors 2024, 24(13), 4215; https://doi.org/10.3390/s24134215 - 28 Jun 2024
Cited by 1 | Viewed by 1316
Abstract
As a technical application of artificial intelligence, a social robot is one of the branches of robotic studies that emphasizes socially communicating and interacting with human beings. Although both robotics and behavioral research have recognized the significance of social robot design for market success and the related emotional benefit to users, the specific design of a social robot's eye and mouth shape for eliciting trustworthiness has received only limited attention. To address this research gap, our study conducted a 2 (eye shape) × 3 (mouth shape) full factorial between-subject experiment. A total of 211 participants were recruited and randomly assigned to the six scenarios in the study. After exposure to the stimuli, perceived trustworthiness and robot attitude were measured accordingly. The results showed that round eyes (vs. narrow eyes) and an upturned or neutral mouth (vs. a downturned mouth) significantly improved people’s perceived trustworthiness of, and attitude towards, social robots. The effects of eye and mouth shape on robot attitude are both mediated by perceived trustworthiness. Trustworthy human facial features could be applied to the robot’s face, eliciting a similar trustworthiness perception and attitude. In addition to empirical contributions to HRI, this finding could shed light on design practice for a trustworthy-looking social robot. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. The theoretical model of the study.
Figure 2. Mouth and eye shape interaction on the trustworthiness evaluation.
Figure 3. Trustworthiness and attitude evaluations for the six scenarios (**: significant at p < 0.05; ns: non-significant).
Figure 4. The effect of mouth and eye shape on trustworthiness (left) and attitude (right) towards the social robot (**: significant at p < 0.05; ns: non-significant).
Figure 5. Trustworthiness mediates the effect of eye shape on robot attitude (***: significant at p < 0.01; **: significant at p < 0.05).
Figure 6. Trustworthiness mediates the effect of mouth shape on robot attitude (***: significant at p < 0.01).
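The abstract describes a 2 (eye shape) × 3 (mouth shape) between-subject design in which perceived trustworthiness mediates the effect of facial features on robot attitude. The following sketch, using synthetic data and statsmodels, only illustrates the general shape of such an analysis (a factorial model for trustworthiness plus a simple mediation check); it is not the authors' analysis script, and the simulated effect sizes are invented:

```python
# Illustrative sketch of a 2 x 3 factorial analysis with a mediator, on made-up data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 211  # sample size reported in the abstract
df = pd.DataFrame({
    "eye":   rng.choice(["round", "narrow"], size=n),
    "mouth": rng.choice(["upturned", "neutral", "downturned"], size=n),
})
# Hypothetical ratings: trustworthiness depends on the face features,
# and attitude depends on trustworthiness (the assumed mediation path).
df["trust"] = (
    4
    + 0.5 * (df["eye"] == "round")
    + 0.4 * (df["mouth"] != "downturned")
    + rng.normal(0, 1, n)
)
df["attitude"] = 2 + 0.6 * df["trust"] + rng.normal(0, 1, n)

# Step 1: do eye and mouth shape predict perceived trustworthiness?
m_trust = smf.ols("trust ~ C(eye) * C(mouth)", data=df).fit()
# Step 2: does trustworthiness carry the effect on attitude (simple mediation check)?
m_att = smf.ols("attitude ~ trust + C(eye) + C(mouth)", data=df).fit()
print(m_trust.summary().tables[1])
print(m_att.summary().tables[1])
```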
32 pages, 4861 KiB  
Article
Creating Expressive Social Robots That Convey Symbolic and Spontaneous Communication
by Enrique Fernández-Rodicio, Álvaro Castro-González, Juan José Gamboa-Montero, Sara Carrasco-Martínez and Miguel A. Salichs
Sensors 2024, 24(11), 3671; https://doi.org/10.3390/s24113671 - 5 Jun 2024
Viewed by 1208
Abstract
Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled into two dimensions: symbolic (information aimed at achieving a particular goal) and spontaneous (displaying the speaker’s emotional and motivational state) communication. Thus, to enhance human–robot interactions, the expressions that are used have to convey both dimensions. This paper presents a method for modelling a robot’s expressiveness as a combination of these two dimensions, where each of them can be generated independently; this is the first contribution of our work. The second contribution is the development of an expressiveness architecture that uses predefined multimodal expressions to convey the symbolic dimension and integrates a series of modulation strategies for conveying the robot’s mood and emotions. The last contribution is a series of experiments that validate the proposed architecture by studying how adding the spontaneous dimension of communication, and fusing it with the symbolic dimension, affects how people perceive a social robot. Our results show that the modulation strategies improve the users’ perception and can convey a recognizable affective state. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. Mini, a social robot developed for interacting with older adults who have mild cognitive impairment.
Figure 2. Software architecture integrated in Mini.
Figure 3. Overview of the Expression Manager; the grey blocks (affect generator, HRI Manager, and output interfaces) are outside the scope of this work.
Figure 4. Process followed for expressing Mini's internal state: actions are performed in the affect-generation module, the Expression Executor, and the Interface Players (blue, orange, and green blocks, respectively).
Figure 5. Response time of the Expression Manager, measured from the moment a gesture request is sent until the first action is sent to the output interface. Bars show averages and whiskers standard deviations; reference lines mark the thresholds for responding to a stimulus (green), identifying whether a stimulus requires a response (blue), and conscious interactions (red). Bars named after an interface correspond to single-action expressions, while "Full gesture" corresponds to a multimodal gesture performing multiple actions.
Figure 6. Mini during the quiz game conducted as part of the evaluation; the robot's tablet shows the question asked and the four possible answers (all in Spanish).
Figure 7. Evolution of the affective state expression during the experiment.
Figure 8. Confusion matrices for the affective state-recognition evaluation: rows are the affective state the robot expressed and columns the options selected by participants (as percentages); correct selections are highlighted with a thick black border, and colour intensity reflects the percentage of participants selecting each option.
Figure 9. Average ratings for each RoSAS dimension (warmth, competence, discomfort) in both conditions; bars show averages and whiskers 95% confidence intervals.
Figure 10. Evolution of the amplitude and speed parameters during the game of guessing the location of famous landmarks.
Figure 11. Average ratings for each RoSAS dimension (warmth, competence, discomfort) in both conditions; bars show averages, whiskers 95% confidence intervals, and asterisks mark significant differences.
Figure 12. Average competence rating for users reporting mid-high or high interest in owning a robot and users reporting mid-high or high familiarity with technology; bars show averages, whiskers 95% confidence intervals, and asterisks mark significant differences.
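The architecture described above conveys the symbolic dimension through predefined multimodal expressions and layers the spontaneous dimension on top by modulating how those expressions are played (Figure 10 refers to amplitude and speed parameters). The sketch below illustrates that general idea with an assumed valence/arousal-to-gain mapping; it is a minimal sketch, not the modulation strategies actually implemented on the Mini robot:

```python
# Minimal sketch of the modulation idea: a predefined (symbolic) gesture is replayed
# with its amplitude and speed scaled by an affective state. The valence/arousal-to-gain
# mapping below is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class Keyframe:
    time_s: float          # time at which the pose is reached, in seconds
    joint_positions: dict  # joint name -> commanded position (normalized to [-1, 1])

def modulate(gesture: list[Keyframe], valence: float, arousal: float) -> list[Keyframe]:
    """Scale a predefined gesture by the robot's affective state.

    Higher arousal -> faster and wider motion; lower valence -> slightly damped motion.
    valence and arousal are assumed to lie in [-1, 1].
    """
    speed_gain = 1.0 + 0.5 * arousal                   # compresses/stretches the time axis
    amp_gain = 1.0 + 0.3 * arousal + 0.2 * valence     # scales joint excursions
    return [
        Keyframe(
            time_s=kf.time_s / speed_gain,
            joint_positions={j: max(-1.0, min(1.0, p * amp_gain))
                             for j, p in kf.joint_positions.items()},
        )
        for kf in gesture
    ]

# Example: a two-keyframe "greeting" gesture replayed in a sad, calm state.
greeting = [Keyframe(0.5, {"arm_lift": 0.8}), Keyframe(1.0, {"arm_lift": 0.0})]
print(modulate(greeting, valence=-0.6, arousal=-0.4))
```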
22 pages, 13026 KiB  
Article
Development of a Personal Guide Robot That Leads a Guest Hand-in-Hand While Keeping a Distance
by Hironobu Wakabayashi, Yutaka Hiroi, Kenzaburo Miyawaki and Akinori Ito
Sensors 2024, 24(7), 2345; https://doi.org/10.3390/s24072345 - 7 Apr 2024
Cited by 1 | Viewed by 1522
Abstract
This paper proposes a novel tour guide robot, “ASAHI ReBorn”, which can lead a guest by hand one-on-one while maintaining a proper distance from the guest. The robot uses a stretchable arm interface to hold the guest’s hand and adjusts its speed according to the guest’s pace. The robot also follows a given guide path accurately using the Robot Side method, a navigation approach that follows a pre-defined path quickly and accurately. In addition, a control method is introduced that limits the robot’s angular velocity to avoid quick turns while guiding the guest. We evaluated the performance and usability of the proposed robot through experiments and user studies. The tour-guiding experiment revealed that the proposed method, which keeps the distance between the robot and the guest using the stretchable arm, allows guests to look around the exhibits more than the condition in which the robot moved at a constant velocity. Full article
(This article belongs to the Special Issue Challenges in Human-Robot Interactions for Social Robotics)
Figures:
Figure 1. The tour guide robot "ASAHI ReBorn". The robot moves along the guide route (waypoints) using the Robot Side method (Figure 2) while the guest holds the robot's hand, and it controls its velocity according to the distance to the guest.
Figure 2. Path following using the Robot Side method, with S1, S2, and S3 denoting its three states. (a) When off-path, the robot quickly returns by targeting a point on a virtual circle. (b) When the robot approaches the path, it avoids overshooting by moving the target to the other side of the circle and then moving it forward. (c) When the robot is near the path, it maintains a stable course and attitude angle alongside the path.
Figure 3. Mechanism of the robot avatar with an extendable arm; the arm's string is wound around a pulley attached to a motor unit. (a) Overall view. (b) Close view.
Figure 4. The tension f_a and the pulling distance d_a; the tension is proportional to the distance, i.e., f_a = K_f * d_a.
Figure 5. The robot's translational speed v_r and the human-robot distance d_h; v_r is controlled so that d_h approaches a pre-defined distance D_max, i.e., v_r = K_p (D_max - d_h).
Figure 6. Block diagram of the system; the control processes for moving the base (upper blocks) and the arm (lower blocks) work independently.
Figure 7. The tour guide robot ASAHI ReBorn. It has two robot avatars at the front and back of the body (a, b) and four LRFs, with LRF2 used for navigation and LRF4 used for measuring the distance to the guest (LRF1 and LRF3 are not used in the experiments in this paper). As shown in (c), the guest pulls the hand (string) of the robot avatar mounted on the back while being guided.
Figure 8. System configuration of ASAHI ReBorn (ovals denote ROS modules). The robot avatar and arm are controlled from a Windows server, which communicates with the ROS server running on Ubuntu under VMware; the SLAM task and the mobile-base control run on Ubuntu.
Figure 9. Control flow of ASAHI ReBorn. Six processes (Futaba Controller, Stretchy Arm, Detect Human, Windows Server, Path Following, and Amcl) run in parallel, exchanging data using sockets; p_h is the center coordinate of the guest measured by the LRF [51].
Figure 10. Control flow of tour guidance by ASAHI ReBorn, showing the overall behavior of the guest (person), the robot (ASAHI), and the robot avatar.
Figure 11. Temporal change of d_h; two patterns, D_max |cos(0.2πt)| and D_max |sin(0.2πt)|, were tested.
Figure 12. Example trajectories under each condition: (a) without speed control (constant v_r); (b, c) with different values of K_p.
Figure 13. Mean squared error between the robot's trajectory and the guidance path, with its standard deviation: (a) constant speed; (b) distance control.
Figure 14. The guide route with a bend and the positions of the robot and the guest: (a) initial positions, where the robot starts moving first and the guest stays at the initial position until d_h reaches 0.97 m; (b) positions when the guest starts to move.
Figure 15. ASAHI ReBorn leading the guest using distance control: (a) without and (b) with the angular velocity control; in (a) the robot turns the corner at a right angle, while in (b) it turns the corner gently.
Figure 16. Velocity, angular velocity, and d_h without the angular velocity control: (a) v_h = 0.20 m/s; (b) v_h = 0.45 m/s; (c) v_h = 0.45 m/s and stop.
Figure 17. Velocity, angular velocity, and d_h with the angular velocity control: (a) v_h = 0.20 m/s; (b) v_h = 0.45 m/s; (c) v_h = 0.45 m/s and stop.
Figure 18. Exhibits and the guidance path: four exhibits were prepared in a room, each posing an arithmetic problem that participants solved and reported at the destination. (a) Example exhibit. (b) Guidance path and exhibits.
Figure 19. The robot tour-guiding experiment, confirming that the robot and guest moved slowly at each exhibit.
Figure 20. Mean questionnaire values with standard errors; "Keep." and "Cons." denote the experimental conditions with and without the distance control, respectively.
Figure 21. Mean gazing and slow-moving times with standard errors for the "Keep." and "Cons." conditions: (a) duration of gazing at an exhibit; (b) duration of moving slowly near an exhibit.
Figure 22. Histogram of the duration of gaze at an exhibit.
Figure 23. Mean questionnaire values summarized group by group, with standard errors: (a) Slow and Fast groups; (b) Long and Short gaze groups.
Figure 24. Guiding multiple guests using ASAHI ReBorn; the robot could move at the pace of the lead guest but could not wait for the other guests behind the lead.
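The captions above give the two control laws used for leading the guest: the arm tension is proportional to the pulling distance (f_a = K_f * d_a), and the translational speed is set so the human-robot distance approaches D_max (v_r = K_p (D_max - d_h)), with the angular velocity additionally limited to avoid quick turns. A minimal sketch of that proportional distance-keeping control, with illustrative gains and limits rather than the values used on ASAHI ReBorn, could look like this:

```python
# Minimal sketch of the distance-keeping speed control described above:
# v_r = K_p * (D_max - d_h), clipped to the robot's speed range, plus a simple
# clamp on the angular velocity. Gains, limits, and the simulated distance
# readings are illustrative assumptions, not ASAHI ReBorn's parameters.

D_MAX = 1.0   # target human-robot distance [m]
K_P = 0.8     # proportional gain [1/s]
V_MAX = 0.45  # translational speed limit [m/s]
W_MAX = 0.5   # angular velocity limit [rad/s]

def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

def speed_command(d_h: float) -> float:
    """Slow down (or stop) as the guest lags behind; move on as they catch up."""
    return clamp(K_P * (D_MAX - d_h), 0.0, V_MAX)

def turn_command(w_desired: float) -> float:
    """Limit the angular velocity so the robot never turns sharply while leading."""
    return clamp(w_desired, -W_MAX, W_MAX)

if __name__ == "__main__":
    for d_h in (0.3, 0.8, 1.0, 1.3):  # simulated distance measurements [m]
        print(f"d_h={d_h:.1f} m -> v_r={speed_command(d_h):.2f} m/s")
```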