Review

A Review on Human–Robot Proxemics

by S. M. Bhagya P. Samarakoon 1, M. A. Viraj J. Muthugala 1,* and A. G. Buddhika P. Jayasekara 2

1 Engineering Product Development Pillar, Singapore University of Technology and Design, 8 Somapah Rd., Singapore 487372, Singapore
2 Department of Electrical Engineering, University of Moratuwa, Moratuwa 10400, Sri Lanka
* Author to whom correspondence should be addressed.
Electronics 2022, 11(16), 2490; https://doi.org/10.3390/electronics11162490
Submission received: 30 June 2022 / Revised: 1 August 2022 / Accepted: 6 August 2022 / Published: 10 August 2022
(This article belongs to the Special Issue Path Planning for Mobile Robots)
Figure 1. Robot approaching a goal position while perceiving human actions and maintaining an appropriate distance with the person; (a): a person doing an exercise, (b): a person working on a laptop, and (c): two persons having a conversation.
Figure 2. Taxonomy used in this paper to analyze the proxemics literature.
Figure 3. Hall's proxemic zones introduced in [30].
Figure 4. Six basic types of F-formation defined by Ciolek and Kendon [39].
Figure 5. An overview of the HRP study conducted in [49]. (a): a robot with different internal noise levels, (b): an anthropomorphic robot head, (c): a manipulator, and (d): a service robot.
Figure 6. The experimental arrangement of the HRP study reported in [58].
Figure 7. The taxonomy of cases considered in the study [66].
Figure 8. The set of locations defined around a user for determining the comfortable HRP by the proxemic planner proposed in [67].
Figure 9. An overview of the ANFIS proposed in [70].
Figure 10. An overview of the system proposed in [71].
Figure 11. Motivation behind the method proposed in [72]. (a): a small interpersonal distance is sufficient since body joints are not much extended and not moving fast; (b): a large interpersonal distance is required since body joints are widely extended and moving at a considerable speed.
Figure 12. The topology of the deep learning network proposed in [75].
Figure 13. The method proposed in [80] to adapt a robot's behavior based on HRP.
Figure 14. Proxemic-aware passing strategy proposed in [83].
Figure 15. HRP-aware path planning strategy proposed in [85].

Abstract

Service robots are nowadays being utilized in a vast range of application areas as a promising effort to uplift living standards. These service robots are intended to be used by non-expert users, and their service tasks often require navigation in human-populated environments. Thus, users expect human-friendly navigation behavior from these robots. A service robot should be aware of Human–Robot Proxemics (HRP) to facilitate human-friendly navigation behavior. This paper presents a review of HRP. Both user studies conducted to explore HRP preferences and methods developed toward establishing HRP awareness in service robots are considered within the scope of the review. The available literature has been scrutinized to identify the limitations of the state of the art and potential future work. Furthermore, important HRP parameters and behavior revealed by the existing user studies are summarized under one roof to ease access to the data required for developing HRP-aware behavior in service robots.

1. Introduction

A service robot can be defined as “a robot that performs useful tasks for humans or equipment excluding industrial automation applications” according to the International Federation of Robotics [1]. The autonomy of service robots ranges from partial to full autonomy; they can perform meaningful and purposive tasks based on information gathered from their environment, user, and knowledge [2]. Service robots play a vital role in the present world since they are utilized in numerous application areas, including education [3,4,5], health care [6,7], entertainment [8,9], cleaning [10,11,12], and guidance [13,14,15]. Moreover, the utilization of service robots to support day-to-day tasks improves the quality of life.
Intelligent service robots are capable of increasing productivity and reducing costs, which drives the growth of sales in the robotics market [16]. Many robots are designed to become part of the daily lives of ordinary people. However, most users of these robots do not possess much knowledge about the robotics domain. Therefore, users prefer these service robots to have human-friendly features [17,18]. In this regard, service robots should be equipped with human-like cognitive decision-making abilities. A robot intended for direct interactions with humans in human-shared environments is known as a cobot, i.e., a collaborative robot. According to [19], cobots, including industrial robots, are designed to interact explicitly with humans and to cope with a shared payload. Thus, cobots should have proper human–robot interaction capabilities to enhance their usability.
A service robot frequently needs to navigate when performing typical activities. Developments of navigation algorithms such as A-star [20], SLAM [21], Dijkstra [22], and the vector field histogram [23] have attempted to improve the navigation functionalities of service robots, such as path planning and obstacle avoidance. These robots often operate in human-populated environments. Thus, the path planning module of a service robot should incorporate human-aware navigation capabilities [24,25]. Moreover, users expect human-friendly navigation behavior from these robots [26,27].
Human-friendly navigation capabilities of a service robot would enhance the overall interaction between the robot and a user, ultimately increasing user satisfaction [28,29]. A service robot should be capable of maintaining a proper distance and direction with a user while approaching the user or navigating toward a goal. This scenario can be explained by the example shown in Figure 1, which represents a robot navigating from the initial position ‘O’ to the goal position in different contexts. In Figure 1a, a person is doing an exercise, and the movements change rapidly. The person requires a large free space around him. Thus, the robot should maintain a long distance from the user during navigation in this case. A situation where a person works with a computer while sitting is depicted in Figure 1b. In this case, the distance to be maintained by the robot should be shorter than in the exercise scenario. Apart from single-person scenarios, there can be multi-person scenarios, as depicted in Figure 1c, where two persons are having a conversation. In this scenario, the robot should navigate toward the goal without hindering the ongoing conversation, obeying social etiquette. Therefore, Human–Robot Proxemics (HRP) plays a vital role when navigating in human-populated environments.
Proxemics is the study of the use of space that humans set between themselves and others during interactions or while performing an activity. It can be introduced as the distance between persons when they interact with each other [30]. According to the study [31], HRP is essential for understanding the relationships between humans and robots and for HRI factors such as rapport, cooperation, and positive experience. The work [32] found that users often adjust HRP to a comfortable level when a robot invades their personal space. The work [33] found that a user may get distracted from an ongoing activity if a robot does not maintain proper HRP. On the contrary, the papers [34,35] argue that HRP alone does not influence the subjective measurements of users, such as user comfort, and that a user prefers the distance at which a robot accurately perceives user instructions over a comfortable HRP. Thus, the literature on HRP studies should be scrutinized to reveal insights helpful in developing service robots. Furthermore, the development of service robots with human-like proxemic awareness is challenging due to the need to embody human-like cognitive features in the robots. Therefore, a timely written review paper on HRP would be beneficial for developing the field of service robotics since the content of the review reveals the current status, limitations, and potential future directions of this research niche.
In this regard, this paper presents a review of the literature on HRP. The literature reviewed in this paper was selected by exploring major indexing databases such as Web of Science, SCOPUS, and Google Scholar. Non-peer-reviewed or unpublished manuscripts such as news items, theses and dissertations, web articles, and technical reports were excluded from the review. Books, chapters, and standards that could be used to provide supporting statements or additional information, such as definitions, are included in the paper. In cases where the same concept was reported in two documents, such as a conference paper and an extended journal article, the journal article was given major focus. Only manuscripts published in English were considered for review. The gathered literature was taxonomically analyzed, and the taxonomy identified for the analysis is depicted in Figure 2. A discussion of explanatory studies on human–human proxemics is given in Section 2. Section 3 reviews the existing explanatory studies conducted to identify natural HRP behavior. Furthermore, a summary of the findings revealed by the studies is given to form a collection of facts related to HRP behavior. The state-of-the-art methods proposed for establishing HRP-aware behavior in service robots are reviewed in Section 4. The limitations of the current work and potential future work identified through the review are taxonomically presented in Section 5. Concluding remarks of the review are given in Section 6.

2. Human–Human Proxemics Studies

Anthropologist Edward T. Hall [30] introduced the term ‘Proxemics’, defined as “the interrelated observations and theories of humans’ use of space as a specialized elaboration of culture”. He conducted various experiments on human–human and human–animal interactions to observe the variations in the use of space in different contexts. This work can be considered the foundational work of proxemics. Although Hall’s proxemics experiments were conducted on Americans, proxemics patterns in the cross-cultural contexts of Japanese and Arab populations have also been deduced. Hall was inspired by Hediger’s [36] animal distance observations, in which animals maintain distances such as flight distance and critical distance when different kinds of species meet. Similarly, he introduced distance zones that humans maintain with each other during different interaction contexts. His work reported four main proxemic zones. These zonal distributions can be represented as indicated in Figure 3. Each zone is subdivided into a close and a far phase. The close phase of the intimate distance is associated with physical contact or the possibility of physical engagement. The distance range related to each zone is given below with its specific interaction context (a minimal sketch encoding these boundaries follows the list).
  • Intimate Distance: used for embracing, touching, or whispering
    - Close Phase: less than 15 cm
    - Far Phase: 15 to 46 cm
  • Personal Distance: used for interactions among good friends or family
    - Close Phase: 46 to 76 cm
    - Far Phase: 76 to 122 cm
  • Social Distance: used for interactions among acquaintances
    - Close Phase: 1.2 to 2.1 m
    - Far Phase: 2.1 to 3.7 m
  • Public Distance: used for public speaking
    - Close Phase: 3.7 to 7.6 m
    - Far Phase: 7.6 m or above
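For quick reference, the zone boundaries listed above can be encoded as a simple lookup. The following minimal Python sketch classifies an interpersonal distance into Hall's zones; the boundary values are taken from the list above, while the function and constant names are illustrative.

```python
# Hall's proxemic zones (close/far phases) encoded as distance thresholds in metres.
# Boundary values follow Hall's zones listed above; the names are illustrative.
HALL_ZONES = [
    (0.15, "intimate (close)"),
    (0.46, "intimate (far)"),
    (0.76, "personal (close)"),
    (1.22, "personal (far)"),
    (2.10, "social (close)"),
    (3.70, "social (far)"),
    (7.60, "public (close)"),
]


def hall_zone(distance_m: float) -> str:
    """Return the Hall zone/phase that a given interpersonal distance falls into."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    for upper_bound, label in HALL_ZONES:
        if distance_m < upper_bound:
            return label
    return "public (far)"


if __name__ == "__main__":
    for d in (0.1, 0.9, 2.5, 10.0):
        print(f"{d:5.1f} m -> {hall_zone(d)}")
```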
Argyle and Dean [37] conducted experiments on the relationship between eye contact and interpersonal distance. According to their studies, eye contact is considered a component of intimacy and influences proxemic preferences. Moreover, the proxemic distance decreased when eye contact was reduced. The study [38] discovered that, most of the time, females prefer the close phase of the personal zone (defined by Hall) during human–human interactions.
During multi-person gatherings, people arrange formations and distances between them to exchange glances, words, and gestures comfortably. The work [39] introduced the basic spatial arrangements of persons defined as F-formations. The six basic F-formations are shown in Figure 4. The formations provide an amicable arrangement of people for the smooth and efficient performance of a given task. The work [39] identified many situations in which these F-formations apply for two persons, such as at a setting edge (next to a wall or similar barrier), away from the setting edges (in an open, unstructured space), in a traffic line transecting a pedestrian setting (e.g., a plaza), and on a path transecting an open non-pedestrian setting (e.g., a lawn).
The study [27] explored the approaching behavior of a person toward another person who is having a conversation with a third person. The study investigated the variation of the proxemic behavior of the approaching person with three factors: the orientation of the two conversing persons, the initial position of the approaching person, and the distance between the conversing persons. The study found that the termination distance between the approaching person and the person of interest is independent of the orientations of and the distances between the two conversing persons. Furthermore, there is no effect on the proxemics from the initial position of the approaching person. The mean termination distance was found to be 91 cm. The experiments were conducted using participants with South Asian cultural backgrounds, for which proxemics data are limited.

3. Human–Robot Proxemics (HRP) Studies

3.1. User Attributes and HRP

A study has been conducted to identify the HRP preferences of adults and children [40]. In the study, the participants were asked to move toward the robot and indicate their comfortable distance preference. A robot’s approach toward participants was also considered. It was found that children prefer the social zone and adults the personal zone (Hall’s zones). Effects of the personality of a user on HRP have been studied in [41,42]. Both studies conclude that user personality has a significant effect on HRP. According to [41], proactiveness is the only personality factor that correlates with social distance among the considered factors: social reluctance, timidity, and nervousness. The study [42] considered extraversion, agreeableness, conscientiousness, neuroticism, and openness as the user personality factors. It found that users with lower neuroticism and lower openness prefer a closer HRP than those with higher neuroticism and higher openness. The effect of user gender on HRP has been studied in [43], and the work could not find a significant effect of gender on HRP. The effects of a user’s cultural background on HRP and robot–robot proxemics have been studied in [44]. Arabs and Germans were the two cultures considered for the study. There was a significant effect of culture on HRP. Irrespective of the culture, user preference for robot–robot proxemics is significantly different from HRP.
The adaptation of HRP with the experience of users has been examined in several studies [45,46,47]. According to [46], HRP adapts with experience, and the adaptation happens only within the first few interactions. The study [47] found that pet owners prefer a larger HRP than those who do not own a pet. However, contradictory outcomes were identified in the study [45], where people who own a pet kept a significantly closer HRP than people who have never owned a pet. Furthermore, people with at least one year of experience with a robot preferred a closer HRP than others.

3.2. Robot Attributes and HRP

In [48], an experiment was conducted to explore the HRP variation concerning a robot with four different voice types: natural male voice, natural female voice, synthesized voice, and no voice. According to the outcomes of the experiment, a significantly larger HRP was observed for the synthesized voice than in the other three cases. Furthermore, no effect of the voice gender on HRP could be observed. The variation of HRP with the internal noises of a robot has been studied in [49]. A user approaching a robot with different levels of machine-like noise was considered for the study (see Figure 5a). The HRP significantly increases with the increment of the robot’s noise level. The study [50] found the same sort of behavior in HRP when passing a robot. Furthermore, masking the noise with music could reduce the HRP.
People who interact with a robot may be influenced by its physical appearance. Several studies have been conducted to determine the relationship between HRP and the physical appearance of a robot. The study [49] used four different robots: a manipulator robot, two service robots, and an anthropomorphic robot head (shown in Figure 5b) to investigate this hypothesis. The HRP preference for the manipulator robot was significantly larger than for the other three robots, implying that lower human-likeness increases HRP. Another similar study was conducted in [51] considering a short mechanoid, a tall mechanoid, a short humanoid, and a tall humanoid robot. The results indicate that a higher degree of anthropomorphic attribution is linked to higher expectations of HRP norms. The comfort of HRP while a robot of different sizes approaches or passes a person has been evaluated in [52]. Here, a robot comparable to human height was considered a large robot, while a robot shorter than human knee height was considered a small robot for the study. The study indicated that users feel comfortable with a small robot coming closer than with a large robot.
The variation of HRP with the availability of a human-like virtual reality avatar on a mobile robot has been studied in [53]. In this regard, HRP variations during two scenarios, a user following a robot and a user avoiding a robot, were studied. The study failed to identify an effect of the availability of the human-like virtual reality avatar on HRP.
The HRP variation with a robot’s vocal and facial emotions has been studied in [49]. The study setups are shown in Figure 5b,c, respectively. The six primary facial emotions (i.e., happy, sad, disgust, surprise, fear, and anger) were displayed by the robot during the study. The outcomes revealed that humans prefer the largest proxemic distance with a robot when the robot’s facial emotion is angry or disgusted. In contrast, humans prefer a closer HRP when a robot’s facial expression is happy or sad. In the case of vocal emotions, the robot’s synthesized voice was altered to reflect happy, sad, angry, and fearful emotions. In this case, it was found that users preferred a significantly larger HRP when the robot’s vocal emotion was angry. In the work [54], the authors compared HRP for happy, neutral, and sad cases. A combination of facial and body expressions was used to reflect the emotions in this work. They found that HRP in the case of the happy emotion is significantly smaller than for the sad emotion.

3.3. Context and HRP

A study has been conducted to determine the human-preferred approaching direction and distance for a robot considering four different contexts [55]: sitting on a chair in the middle of an open space, standing in the middle of an open space, sitting at a table in the middle of an open space, and standing with the back against a wall. Front left and front right were observed as the most comfortable approaching directions for all scenarios within the personal distance range. Approaching from the rear was the least preferred. In the studies [42,43], the variation of HRP according to the user’s body posture has been studied. The study [42] identified a significant effect of user body posture on the preference of HRP: the postures of standing and walking were associated with a larger HRP than the lower postures of sitting and lying. However, the study [43] failed to identify a significant effect of user body posture on HRP; it considered only sitting and standing as body postures. Thus, the results are consistent with those of the study [42].
The study [43] observed a significant impact of robot posture on HRP, where a closer HRP is preferred when the robot is sitting rather than standing. The works [45,47] examined the variation of HRP with robot gaze. Two different gazing contexts, a robot gazing toward a user’s head or legs, were considered in the study [45]. The study found a significant effect of gaze when the context is segregated based on gender: in the case of a robot gazing toward a user’s head, women prefer a significantly larger HRP than men, whereas there is no significant difference for gaze toward the legs. However, no significant impact of robot gaze on HRP could be found without the gender-based segregation. In [47], contexts of mutual and averted robot gaze were considered; however, no effect of gaze (general or segregated by gender) could be observed. The variation of HRP with a robot’s service task has been studied in [56]. The study considered two service tasks where a robot handed over a soft drink and a hat. The participants showed no significant difference in proxemic distance preference between the two tasks. However, the study found that the preference for a robot’s approaching direction significantly differs between the two service tasks in fresh interactions. During long-term interactions, this difference vanishes.
The variation of HRP according to the ways of approaching, a human toward a robot and a robot toward a human, has been studied in [40,41,46]. The studies concluded that HRP is not significantly dependent on these two approaching contexts. However, according to [46], users do not prefer a closer HRP when a robot approaches them in scenarios where their mobility is restricted. The study [57] found that participants prefer a robot to approach from the right to hand over an object to a person sitting on a chair. Approaching from the front was the least preferred in this context. The works [58,59] conducted a user study analyzing the stopping distance when a robot approaches two persons during a conversation. The robot was moved along different approaching paths, as shown in Figure 6, while the two persons were sitting on chairs, and the comfortable HRP decided by the participants was recorded. The outcomes of the study can be summarized as follows. In general, the preferred distance was outside of Hall’s personal zone. There is a significant effect of approaching direction on stopping distance and user comfort. Users feel the highest comfort when a robot approaches from the front, while the least comfort was observed when the robot approached from the −70° direction.
Comfortable HRP during passing scenarios of robots and humans has been studied in [60]. In this regard, cases of a robot passing on the left and on the right of a participant were considered. According to the observations of the study, there is no significant effect from the passing side. In contrast, user comfort increases significantly with the increment of the passing HRP. The study [53] examined the variation of HRP while a human follows or avoids a robot. The study found that a robot’s moving speed significantly affects HRP in both the following and avoiding contexts: during following, HRP increases with the robot’s speed, while during avoidance, HRP decreases with its speed.

3.4. HRP Studies on Emerging Directions

3.4.1. HRP in Virtual Reality (VR) Settings

Explorations of HRP in VR scenarios are an emerging area in the literature. In the study [61], the authors compared the variation in HRP in VR and the real world. Overall, they found that the comfortable HRP was significantly larger in VR than in the real world. In the case of VR, four distinct scenarios (i.e., familiar environment, unfamiliar environment, with sound, and with no sound) were considered. The HRP observed in the VR scenario with sound is smaller than that of the scenario with no sound. A significantly larger HRP was observed for the VR scenario with an unfamiliar environment and no sound compared to the real world. However, there is no significant difference among the VR scenarios. In [62], a situation of a robot and a user collaboratively exploring a room in a VR environment has been considered. Two cases of the robot, with and without proxemics awareness, were used for the investigation. According to the outcomes, most participants preferred the robot with proxemics awareness. Therefore, it can be concluded that HRP is essential in a VR environment, similar to real-world settings.

3.4.2. Drones and HRP

The study of human proxemics with drones is another emerging research niche in HRP due to drone usage in human-populated environments. Comfortable HRP for a drone approaching humans has been studied in [63]. In this regard, a drone was moved into the intimate, personal, and social zones (Hall’s zones), and user comfort was evaluated. The personal distance was significantly preferred by the participants over the other two zones. When the drone came near a participant, comfort decreased; comfort did not increase when the drone moved away, although participants felt calm and in control when it did. Furthermore, approaching from the front is preferred. The study [64] also examined the comfortable HRP when a drone approached a human and when a human approached a drone. They found no significant difference between the drone-approach and human-approach cases. In [65], the authors explored the variation of HRP with the approaching height of a drone. However, the study failed to identify a significant effect of approaching height on HRP.
The work [66] studied the variation of HRP with the socialness of a drone. In this regard, four cases of different degrees of socialness, as shown in Figure 7, were considered. A drone with a social shape and a greeting voice was preferred at a closer HRP compared to a drone with a nonsocial shape and no greetings. Apart from the main findings, the study concluded that pet owners prefer closer proxemics than non-pet owners. Furthermore, females preferred a larger HRP than males. In addition, the variation of preferred HRP with height and lateral distance was also studied in the work [66]. A closer HRP was observed when the drone approached a person at a lower height than at a higher height. Users preferred a closer HRP when the lateral distance of the drone’s approach was offset away from them.

3.5. Summary of HRP Preferences Revealed by Human–Robot Studies

A summary of the HRP preferences identified from the human–robot studies is given in Table 1. The factors/parameters altered to identify the effects on HRP, as well as the factors fixed during each study, are stated there, along with quantitative and qualitative HRP preferences. The identified comfortable HRP distances and directions of the different scenarios of each study are given where the information is available in the corresponding paper. These HRP values corresponding to different scenarios would help in developing HRP-aware behavior in service robots.

4. Methods Developed for Establishing Proxemics Awareness in Service Robots

4.1. Methods of Modeling HRP for Improving User Comfort

A proxemic planner that can determine the appropriate HRP for a robot has been developed in [67]. The proposed planner utilizes a proxemic preference algorithm inspired by the human–human and human–robot proxemic studies presented in [47,60]. The proxemic preference algorithm considers 21 possible target coordinates around a user, as shown in Figure 8. The inner two layers are utilized for physical interactions such as fetch-and-carry tasks, while the outer layer is configured only for verbal interaction. The default layer for physical interaction is the middle layer, and the innermost layer is for experienced users. The proxemic planner determines the appropriate HRP in a particular case by selecting one of these predefined coordinates based on contextual factors. The approaching direction is considered to be in the frontal region of a user; an illustrative sketch of such a layered candidate grid is given below.
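The following sketch generates frontal stopping points around a user and picks a layer by interaction type, in the spirit of the planner in [67]. The radii, angular spread, and the 7-points-per-layer split (3 × 7 = 21 candidates) are assumptions for illustration, not the values used by the authors.

```python
import math

# Illustrative layered candidate-point grid around a user. The layer roles (inner two
# for physical interaction, outer for verbal) follow the text; the radii, angular
# spread, and points-per-layer count are assumed values.

LAYER_RADII_M = [0.8, 1.2, 2.0]   # assumed radii for inner / middle / outer layers
POINTS_PER_LAYER = 7              # assumed split: 3 layers x 7 points = 21 candidates
FRONTAL_SPAN_DEG = 180            # assumed frontal region: -90 deg to +90 deg around the user's heading


def candidate_points(user_xy, user_heading_rad):
    """Generate candidate (x, y, layer) stopping points in the user's frontal region."""
    ux, uy = user_xy
    points = []
    for layer, radius in enumerate(LAYER_RADII_M):
        for i in range(POINTS_PER_LAYER):
            # spread angles evenly across the frontal span, centred on the user's heading
            offset_deg = -FRONTAL_SPAN_DEG / 2 + i * FRONTAL_SPAN_DEG / (POINTS_PER_LAYER - 1)
            angle = user_heading_rad + math.radians(offset_deg)
            points.append((ux + radius * math.cos(angle),
                           uy + radius * math.sin(angle),
                           layer))
    return points


def select_point(points, interaction_type, experienced_user=False):
    """Pick a layer by context: verbal -> outer layer; physical -> middle layer by
    default, inner layer for experienced users (mirroring the description above)."""
    if interaction_type == "verbal":
        layer = 2
    else:
        layer = 0 if experienced_user else 1
    layer_points = [p for p in points if p[2] == layer]
    # choose the candidate directly ahead of the user within the chosen layer
    return layer_points[len(layer_points) // 2]
```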
An attempt to develop an empirical framework for determining HRP for a service robot can be seen in [68]. The paper proposed a lookup table, which indicates the variation of HRP in accordance with situations, contexts, and attributes of a robot and a human. Statistical conclusions gathered from previous HRP studies have been utilized to estimate the HRP for the considered lookup table entries. However, the method has not been implemented or experimentally validated on a robot. Defining a long lookup table with many entries to cover all probable cases is the major limitation of the work.
An Adaptive Neuro-Fuzzy Inference System (ANFIS) [69] has been developed to determine the HRP to be maintained by a service robot [70]. The ANFIS is capable of adapting HRP according to the height of a user, a robot’s appearance, and the user’s familiarity with the robot (an overview of the ANFIS is given in Figure 9). A set of data collected from a human–robot user study conducted as a part of this work has been used to train and test the proposed ANFIS. The variation of human height has been considered in the range of 0.5–2 m. Appearance is considered in terms of human-likeness, represented on a 1-to-5 scale. However, the deployment of the model on a robot has not been considered within the scope of the work. Moreover, the validation of the work is limited to testing with the data set.
Another development of an ANFIS for determining HRP to improve user comfort can be found in [71]. The proposed ANFIS is capable of determining the proper stopping HRP during a robot’s approach based on user activity and personality. The configuration of the proposed system is explained in Figure 10. User activity is determined by a Naive Bayesian Classifier (NBC) that analyzes the inertial measurements retrieved from a wearable sensor placed on the user. User activity, personality factors, and robot velocity are fed to the ANFIS as inputs. The personality factors are identified for each user through a questionnaire. The output of the ANFIS is the appropriate HRP for stopping the approach of the robot. Even though the robot is capable of adapting the proxemics based on user activity, the adaptation is limited to a small set of predefined activities (standing, sitting, walking, and lying). Thus, the proposed method would not scale to a wider set of activities.
A method to determine the approaching proxemics of a service robot based on user behavior has been proposed in [72]. The method considers the dynamic movements of a human’s body joints to perceive the user’s behavior instead of relying on posture classification. Thus, the applicability of the method is not limited to a set of activities or posture categories. Furthermore, the system can vary the proxemics within the same activity based on how the user is performing it. The motivation for the proposed method can be explained with the example scenarios shown in Figure 11, where a human doing an exercise with legs and arms widely extended requires a larger interpersonal distance than a situation where the arms and legs are not much extended. In the proposed system, dynamic body parameters perceived through an RGB-D sensor are used as inputs to a fuzzy inference system, whose output is the approaching proxemics of the service robot. A user study has been conducted by embedding the proposed method into a service robot. The user study found that the HRP determined by the proposed system is not significantly different from natural human–human proxemics.
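As a rough illustration of the underlying idea (not the authors' fuzzy system), the following crisp sketch maps joint extension and joint speed to an approaching distance; all thresholds, ranges, and the equal weighting are assumptions.

```python
# Simplified, crisp stand-in for the mapping described in [72]: the more extended the
# user's body joints are and the faster they move, the larger the approaching distance.

def approach_distance(joint_extension_m: float, joint_speed_mps: float) -> float:
    """Map dynamic body parameters to an approaching distance in metres.

    joint_extension_m: how far limb joints extend from the torso (assumed 0-1 m range)
    joint_speed_mps:   how fast the joints are moving (assumed 0-2 m/s range)
    """
    # normalise both inputs to [0, 1]
    extension = min(max(joint_extension_m / 1.0, 0.0), 1.0)
    speed = min(max(joint_speed_mps / 2.0, 0.0), 1.0)
    activity_level = 0.5 * extension + 0.5 * speed   # assumed equal weighting
    # blend between a "calm" distance and a "very active" distance
    d_min, d_max = 1.2, 2.5   # assumed bounds, loosely inspired by Hall's personal/social zones
    return d_min + activity_level * (d_max - d_min)
```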
A Takagi-Sugeno-Kang-type [73] fuzzy inference system to model the variation of user comfort according to a robot’s approach direction and distance has been proposed in [74]. Empirically available proxemic data were used to design the fuzzy inference system. The fuzzy inference system has two inputs, distance and angle of approach, and consists of thirty rules representing all the possible combinations. The output is user comfort. The proposed model has a prediction error of 35.6% for user comfort. This model could be used to make a service robot aware of the variation of user comfort with approaching proxemics. The use of deep learning approaches for the same goal has been investigated in [75]. A subset of the data set discussed in [74] has been used to train different deep neural network architectures by varying the fourth and fifth layers of the deep learning topology shown in Figure 12. Feed-forward layers, gated recurrent units, and long short-term memory are considered to form three different architectures. According to the model testing results, the long short-term memory-based architecture outperforms the other two.
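To make the notion of such a fuzzy comfort model concrete, the following toy zero-order Takagi-Sugeno-Kang sketch relates approach distance and angle to a comfort score. The membership functions, the reduced rule set, and the consequent values are illustrative assumptions; the original model in [74] uses thirty rules fitted to empirical proxemic data.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def ramp_down(x, lo, hi):
    """1 below lo, 0 above hi, linear in between (left-shoulder membership)."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))


def ramp_up(x, lo, hi):
    """0 below lo, 1 above hi, linear in between (right-shoulder membership)."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))


# assumed membership functions for distance (m) and approach angle (deg, 0 = frontal)
DIST_MF = {"near": lambda d: ramp_down(d, 0.5, 1.2),
           "medium": lambda d: tri(d, 0.5, 1.2, 2.5),
           "far": lambda d: ramp_up(d, 1.2, 2.5)}
ANGLE_MF = {"front": lambda a: ramp_down(abs(a), 0.0, 90.0),
            "side": lambda a: tri(abs(a), 0.0, 90.0, 180.0),
            "rear": lambda a: ramp_up(abs(a), 90.0, 180.0)}

# zero-order consequents: assumed comfort score in [0, 1] per (distance, angle) rule
RULES = {("near", "front"): 0.5, ("near", "side"): 0.4, ("near", "rear"): 0.1,
         ("medium", "front"): 0.9, ("medium", "side"): 0.7, ("medium", "rear"): 0.3,
         ("far", "front"): 0.7, ("far", "side"): 0.6, ("far", "rear"): 0.5}


def comfort(distance_m: float, angle_deg: float) -> float:
    """Weighted-average (Sugeno) defuzzification over all rules."""
    num = den = 0.0
    for (d_label, a_label), consequent in RULES.items():
        w = DIST_MF[d_label](distance_m) * ANGLE_MF[a_label](angle_deg)  # product t-norm
        num += w * consequent
        den += w
    return num / den if den > 0 else 0.0
```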
The work [76] proposed a method to adapt the proxemics based on user preferences identified from subconscious body signals. Here, the intimate, personal, and social distances of the robot toward a user can be adapted. A policy gradient reinforcement learning approach is utilized to adapt the proxemic behavior from the initially defined values. A rule-based algorithm to replicate the natural approaching behavior of humans toward two persons who are having a conversation has been implemented in a service robot [77]. This algorithm has been derived based on the outcomes of the study discussed in [27]. The proposed system has been validated through a user study, where it was found that the proposed algorithm can satisfy and comfort both persons involved in an interaction.

4.2. Methods That Adapt HRP for Enhancing Communication

In [78], the development of a method to adapt HRP to enhance communication performance between a human and a service robot is presented. A Bayesian network was developed to represent the relationship between communication parameters (i.e., levels of voice and gesture inputs and outputs) and HRP. The model has been built based on data collected through a human–human study and a human–robot study. The scope of the work is limited to the formation of the model, and validation has not been conducted. This framework has been extended by the work [79] to determine the approaching proxemics of a service robot that would maximize the interaction of social signals. The variation of gesture and speech recognition rates with distance is primarily used by the model to determine the appropriate approaching proxemics. Therefore, the model is capable of improving the interaction through social signals. However, the system cannot alter the proxemics based on any factor other than the characteristics of the sensory perception and output system.

4.3. Methods That Adapt a Robot’s Behavior Based on HRP

A method to adapt the behavior of a social robot based on HRP has been proposed in [80]. The method uses a proxemic scaling function to adapt the behavior. Two types of scaling functions, linear and logarithmic, as shown in Figure 13, have been considered. A Proxemic Scalar Value (PSV) corresponding to the HRP in a given situation is used to adapt the behavior. Here, a robot’s behavior, such as moving velocity and voice output, is modified proportionally to the PSV; a minimal sketch of this idea is given below. The performance of the proposed method has been evaluated through a user study. According to the outcomes of the user study, both logarithmic and linear scaling functions are preferred over a system with no scaling in the aspects of user comfort and stress. In contrast, the logarithmic proxemic scaling function is preferred over the linear scaling function in terms of the robot attributes of intelligence, likability, proxemic awareness, and submissiveness.
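The following sketch illustrates the idea of linear and logarithmic proxemic scaling: the current human-robot distance is mapped to a Proxemic Scalar Value in [0, 1], which then scales behavior parameters such as speed and voice volume. The distance bounds and scaling constants are assumptions, not the values used in [80].

```python
import math

# Illustrative linear and logarithmic proxemic scaling functions; values are assumed.
D_MIN, D_MAX = 0.45, 3.6   # assumed working range, loosely Hall's personal-to-social span


def psv_linear(distance_m: float) -> float:
    """Linear PSV: grows uniformly with distance between D_MIN and D_MAX."""
    d = min(max(distance_m, D_MIN), D_MAX)
    return (d - D_MIN) / (D_MAX - D_MIN)


def psv_logarithmic(distance_m: float) -> float:
    """Logarithmic PSV: grows quickly near the user and saturates toward D_MAX."""
    d = min(max(distance_m, D_MIN), D_MAX)
    return math.log(d / D_MIN) / math.log(D_MAX / D_MIN)


def scale_behaviour(distance_m: float, max_speed=0.8, max_volume=1.0, scaling=psv_logarithmic):
    """Scale the robot's speed (m/s) and voice volume proportionally to the PSV."""
    psv = scaling(distance_m)
    return {"speed": psv * max_speed, "volume": psv * max_volume}
```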
The work [81] proposed a system that enables a robot to decide whether to initiate or terminate interaction with a pair of persons based on the proxemic behavior of the persons. The system has been developed using a Hidden Markov Model (HMM) [82], capable of detecting social events leading to interaction initiation and termination. The authors conducted an interaction study to analyze and gather data on human proxemic behavior in natural social encounters. Physical features (the distance and orientation of two persons) and psychophysical features (based upon values from the literature on the human sensory system) are considered to represent human proxemic behavior. The HMM has been trained to determine the required transition of behaviors (i.e., either interaction initiation or termination) based on the proxemic features fed to the model. The classification accuracy of the two social behaviors has been evaluated on testing data for validation.

4.4. Methods for HRP-Aware Path Planning

Service robots often encounter humans when navigating through narrow spaces such as corridors. The work [83] proposed a rule-based model to realize the smooth passing of a robot in such encounters. The proposed rule-based approach can be explained with the aid of Figure 14, where a robot encounters a human in a corridor. When the distance between the robot and the human falls within the personal space (defined by Hall), the robot initiates a move to the right by a predefined lateral distance. After completing the passing, the robot returns to regular operation (i.e., moves toward the goal). Three lateral distances, 0.2 m, 0.3 m, and 0.4 m, were considered, and the experiments found that the lateral distances of 0.3 m and 0.4 m have a higher user preference. The proxemic distance that initiates the movement to the right is also fixed to a predefined value; in other words, the proxemics are not adapted. A minimal sketch of this passing rule is given below.
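In the sketch, the trigger distance (Hall's personal-space boundary) and the 0.3 m lateral offset follow the description above, while the state-machine structure and names are assumptions.

```python
# Minimal sketch of the rule-based corridor-passing behaviour described in [83]: when an
# oncoming person enters the personal space, the robot shifts right by a fixed lateral
# offset, and returns to its nominal path once the person has been passed.

PERSONAL_SPACE_M = 1.2       # Hall's personal zone boundary used as the trigger distance
LATERAL_OFFSET_M = 0.3       # one of the preferred offsets reported in the experiments


class CorridorPassing:
    def __init__(self):
        self.state = "NORMAL"    # NORMAL -> SHIFT_RIGHT -> NORMAL

    def lateral_setpoint(self, distance_to_person_m: float, person_passed: bool) -> float:
        """Return the lateral offset (m, positive = right) the local planner should track."""
        if self.state == "NORMAL" and distance_to_person_m <= PERSONAL_SPACE_M:
            self.state = "SHIFT_RIGHT"
        elif self.state == "SHIFT_RIGHT" and person_passed:
            self.state = "NORMAL"      # resume regular goal-directed navigation
        return LATERAL_OFFSET_M if self.state == "SHIFT_RIGHT" else 0.0
```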
A probabilistic method to determine a human-aware navigation path for a service robot has been proposed in [84]. The proposed method uses a social cost map around a person to minimize the invasion of a human’s social space during navigation. However, the method uses a fixed social cost map around a person in this regard. The HRP-aware navigation path planner proposed in [85] enables a service robot to plan paths in environments populated with multiple persons. The HRP-aware path planning strategy can be explained with the aid of Figure 15. In this method, the personal space of an individual is represented by an asymmetric two-dimensional Gaussian function. When considering a group of humans, the individual Gaussian functions are combined through a Gaussian mixture. A path that minimizes the intrusion into the resultant personal spaces is determined to navigate the robot to a goal; a sketch of such a personal-space cost is given below. The validation of the proposed system is limited to simulations.
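The following sketch illustrates such an asymmetric personal-space cost: each person contributes an asymmetric two-dimensional Gaussian (wider in front of the person than behind), and a planner treats the combined field as extra traversal cost. The variance values are assumptions, and the max-combination is a simplification of the Gaussian mixture used in [85].

```python
import math

# Asymmetric two-dimensional Gaussian personal-space cost, in the spirit of HRP-aware
# planners such as [85]. Sigma values are assumed for illustration.

def personal_space_cost(px, py, person_x, person_y, person_theta,
                        sigma_front=1.2, sigma_rear=0.6, sigma_side=0.8):
    """Cost of point (px, py) w.r.t. a person at (person_x, person_y) facing
    person_theta (radians). Returns a value in (0, 1]."""
    # express the query point in the person's body frame
    dx, dy = px - person_x, py - person_y
    x_body = math.cos(-person_theta) * dx - math.sin(-person_theta) * dy
    y_body = math.sin(-person_theta) * dx + math.cos(-person_theta) * dy
    sigma_x = sigma_front if x_body >= 0 else sigma_rear   # asymmetry along the gaze axis
    return math.exp(-(x_body ** 2 / (2 * sigma_x ** 2) + y_body ** 2 / (2 * sigma_side ** 2)))


def group_cost(px, py, persons):
    """Combine individual personal spaces of persons given as (x, y, theta) tuples.
    The paper combines individuals through a Gaussian mixture; taking the max is a
    simpler, conservative stand-in used here for illustration."""
    return max((personal_space_cost(px, py, *p) for p in persons), default=0.0)
```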
The model proposed in [86] enables a robot to engage a person in a human-like manner while performing a side-by-side walk with another person. The method adapts its navigation behavior according to the accompanying person and other dynamic people. The best point at which to encounter a person is predicted using a gradient descent method that takes into account the dynamics of the people of interest. According to a user study, the proposed model can plan an approach and engage with people while maintaining proxemics within an acceptable range.

5. Limitations of Current Work and Potential Future Directions

As discussed in Section 3, many human–robot explanatory studies have been conducted to identify the natural HRP preferences of humans. Effects on HRP preferences from user attributes, robot attributes, and the context of interaction have been primarily examined by state-of-the-art explanatory studies. The current status and extent of these explanatory studies are taxonomically summarized in Table 2. Even though many explanatory studies have already been conducted to identify the natural HRP preferences of humans, a vast amount of exploration is yet to be conducted to gather the details necessary for developing a service robot with fully HRP-aware behavior. Potential explanatory studies that would benefit the development of HRP-aware service robots are listed in Table 2.
Many methods have been proposed to develop HRP-aware behavior in service robots (see Section 4). Some of these methods are grounded on the outcomes of the HRP explanatory studies conducted to identify natural HRP preferences. However, only a minimal portion of the behavior explored through these studies has been exploited in present developments for establishing HRP-aware behavior in robots. Directing the research focus toward this already identified but unexploited behavior poses an opportunity for the rapid development of novel methods for improving the HRP-aware behavior of service robots. Human–robot explanatory studies might be conducted if natural HRP behavior is not yet known or identified for potential future developments. Identifying the HRP behavior of service robots preferred by users in an area of concern and formulating a framework for replicating the required behavior on the robots are the key challenges faced by researchers. The limitations of the state-of-the-art methods and potential future developments are given in Table 3 in terms of the following three aspects: scope, interaction, and adaptation.

6. Conclusions

This paper has reviewed the literature related to HRP. HRP should be considered in the design of service robots to improve the comfort of their users. Moreover, a service robot should possess HRP awareness when performing its service tasks in human-populated environments. For example, an assistive/caregiving robot should maintain proper proxemics while interacting with a user to improve acceptance. In addition, when delivering something to a user, the robot needs to identify the context and maintain proper distances and directions that satisfy user expectations. Therefore, the proper use of HRP is crucial for many service robotics applications.
Many human–human and human–robot studies have been conducted to find the natural proxemic preferences and behavior of humans. The main intention of the human–robot studies is to gather the data and conclusions required for developing HRP awareness in service robots. Thus, the review examined and presented the key findings on HRP parameters and behavior identified in these studies. Moreover, a collection of key findings, which would benefit robotics researchers in developing HRP awareness in service robots, has been gathered. The review has also highlighted the limitations of the current findings related to HRP and suggested potential exploration directions for future user studies.
Many methods for establishing HRP awareness in service robots can be seen in the literature. The majority of the methods are built upon the outcomes of either human–human or human–robot proxemics studies, and the methods are capable of improving HRI to a certain extent. However, surprisingly little work has been conducted on the development of HRP awareness in service robots compared to the explanatory studies conducted to identify the natural HRP preferences of humans. The limitations of the existing systems have been identified, and possible future improvements have been suggested in this paper. This review concludes that there are promising opportunities for technological advances in this particular research niche, as well as critical challenges to overcome.

Author Contributions

Conceptualization, S.M.B.P.S. and M.A.V.J.M.; methodology, S.M.B.P.S., M.A.V.J.M.; writing—original draft preparation, S.M.B.P.S., M.A.V.J.M.; writing—review and editing, A.G.B.P.J.; visualization, S.M.B.P.S., M.A.V.J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by University of Moratuwa Senate Research Grant Number SRC/LT/2018/20.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ISO 8373: 2012 (en); Robots and Robotic Devices–Vocabulary. International Organization for Standardization: Geneva, Switzerland, 2012.
  2. Kunze, L.; Hawes, N.; Duckett, T.; Hanheide, M.; Krajník, T. Artificial intelligence for long-term robot autonomy: A survey. IEEE Robot. Autom. Lett. 2018, 3, 4023–4030. [Google Scholar] [CrossRef] [Green Version]
  3. Broadbent, E.; Feerst, D.A.; Lee, S.H.; Robinson, H.; Albo-Canals, J.; Ahn, H.S.; MacDonald, B.A. How could companion robots be useful in rural schools? Int. J. Soc. Robot. 2018, 10, 295–307. [Google Scholar] [CrossRef]
  4. Varela-Aldás, J.; Miranda-Quintana, O.; Guevara, C.; Castillo, F.; Palacios-Navarro, G. Educational Robot Using Lego Mindstorms and Mobile Device. In Proceedings of the International Conference on Computer Science, Electronics and Industrial Engineering (CSEI), Ambato, Ecuador, 28–31 October 2019; pp. 71–82. [Google Scholar]
  5. Ngo, T.D. morebots: System development and integration of an educational and entertainment modular robot. In Proceedings of the 2017 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Ottawa, ON, Canada, 5–7 October 2017; pp. 74–80. [Google Scholar]
  6. Richardson, K.; Coeckelbergh, M.; Wakunuma, K.; Billing, E.; Ziemke, T.; Gomez, P.; Vanderborght, B.; Belpaeme, T. Robot Enhanced Therapy for Children with Autism (DREAM): A Social Model of Autism. IEEE Technol. Soc. Mag. 2018, 37, 30–39. [Google Scholar] [CrossRef] [Green Version]
  7. Khan, A.; Anwar, Y. Robots in Healthcare: A Survey. In Proceedings of the Science and Information Conference, Las Vegas, NV, USA, 2–3 May 2019; pp. 280–292. [Google Scholar]
  8. Aaltonen, I.; Arvola, A.; Heikkilä, P.; Lammi, H. Hello Pepper, May I Tickle You?: Children’s and Adults’ Responses to an Entertainment Robot at a Shopping Mall. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 53–54. [Google Scholar]
  9. Morris, K.J.; Samonin, V.; Anderson, J.; Lau, M.C.; Baltes, J. Robot magic: A robust interactive humanoid entertainment robot. In Proceedings of the International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, Montreal, QC, Canada, 25–28 June 2018; pp. 245–256. [Google Scholar]
  10. Muthugala, M.V.J.; Samarakoon, S.B.P.; Elara, M.R. Tradeoff Between Area Coverage and Energy Usage of a Self-Reconfigurable Floor Cleaning Robot Based on User Preference. IEEE Access 2020, 8, 76267–76275. [Google Scholar] [CrossRef]
  11. Samarakoon, S.B.P.; Muthugala, M.V.J.; Elara, M.R.; Kumaran, S. Toward Pleomorphic Reconfigurable Robots for Optimum Coverage. Complexity 2021, 2021, 3705365. [Google Scholar] [CrossRef]
  12. Muthugala, M.V.J.; Vengadesh, A.; Wu, X.; Elara, M.R.; Iwase, M.; Sun, L.; Hao, J. Expressing attention requirement of a floor cleaning robot through interactive lights. Autom. Constr. 2020, 110, 103015. [Google Scholar] [CrossRef]
  13. Azenkot, S.; Feng, C.; Cakmak, M. Enabling building service robots to guide blind people a participatory design approach. In Proceedings of the 2016 11th ACM/IEEE Int. Conf. Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 3–10. [Google Scholar]
  14. Al-Wazzan, A.; Al-Farhan, R.; Al-Ali, F.; El-Abd, M. Tour-guide robot. In Proceedings of the 2016 International Conference on Industrial Informatics and Computer Systems (CIICS), Sharjah, United Arab Emirates, 13–15 March 2016; pp. 1–5. [Google Scholar]
  15. Wang, S.; Christensen, H.I. Tritonbot: First lessons learned from deployment of a long-term autonomy tour guide robot. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 158–165. [Google Scholar]
  16. Belanche, D.; Casaló, L.V.; Flavián, C.; Schepers, J. Service robot implementation: A theoretical framework and research agenda. Serv. Ind. J. 2020, 40, 203–225. [Google Scholar] [CrossRef] [Green Version]
  17. de Graaf, M.M.; Allouch, S.B.; van Dijk, J.A. Long-term acceptance of social robots in domestic environments: Insights from a user’s perspective. In Proceedings of the 2016 AAAI Spring Symposium Series, Stanford, CA, USA, 21–23 March 2016. [Google Scholar]
  18. Yuan, W.; Li, Z. Development of a human-friendly robot for socially aware human–robot interaction. In Proceedings of the 2017 2nd International Conference on Advanced Robotics and Mechatronics (ICARM), Tai’an, China, 27–31 August 2017; pp. 76–81. [Google Scholar]
  19. Peshkin, M.; Colgate, J.E. Cobots. Ind. Robot. Int. J. 1999, 26, 335–341. [Google Scholar] [CrossRef]
  20. Yao, J.; Lin, C.; Xie, X.; Wang, A.J.; Hung, C.C. Path planning for virtual human motion using improved A* star algorithm. In Proceedings of the 2010 Seventh International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 12–14 April 2010; pp. 1154–1158. [Google Scholar]
  21. Song, K.T.; Chiu, Y.H.; Kang, L.R.; Song, S.H.; Yang, C.A.; Lu, P.C.; Ou, S.Q. Navigation control design of a mobile robot by integrating obstacle avoidance and LiDAR SLAM. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1833–1838. [Google Scholar]
  22. Choi, J.H.; Choi, B.J. Indoor Moving and Implementation of a Mobile Robot Using Hall Sensor and Dijkstra Algorithm. IEMEK J. Embed. Syst. Appl. 2019, 14, 151–156. [Google Scholar]
  23. Alagić, E.; Velagić, J.; Osmanović, A. Design of Mobile Robot Motion Framework based on Modified Vector Field Histogram. In Proceedings of the 2019 International Symposium ELMAR, Zadar, Croatia, 23–25 September 2019; pp. 135–138. [Google Scholar]
  24. He, W.; Li, Z.; Chen, C.P. A survey of human-centered intelligent robots: Issues and challenges. IEEE/CAA J. Autom. Sin. 2017, 4, 602–609. [Google Scholar] [CrossRef]
  25. Forer, S.; Banisetty, S.B.; Yliniemi, L.; Nicolescu, M.; Feil-Seifer, D. Socially-aware navigation using non-linear multi-objective optimization. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–9. [Google Scholar]
  26. Truong, X.T.; Ngo, T.D. Toward socially aware robot navigation in dynamic and crowded environments: A proactive social motion model. IEEE Trans. Autom. Sci. Eng. 2017, 14, 1743–1760. [Google Scholar] [CrossRef]
  27. Samarakoon, S.M.B.P.; Muthugala, M.A.V.J.; Jayasekara, A.G.B.P. Identifying approaching behavior of a person during a conversation: A human study for improving human–robot interaction. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1976–1982. [Google Scholar]
  28. Gómez, J.V.; Mavridis, N.; Garrido, S. Social path planning: Generic human–robot interaction framework for robotic navigation tasks. In Proceedings of the 2nd International Workshop on Cognitive Robotics Systems: Replicating Human Actions and Activities, Tokyo, Japan, 3 November 2013. [Google Scholar]
  29. Karreman, D.; Utama, L.; Joosse, M.; Lohse, M.; van Dijk, B.; Evers, V. Robot etiquette: How to approach a pair of people? In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; pp. 196–197. [Google Scholar]
  30. Hall, E.T. The Hidden Dimension; Doubleday & Company Inc.: Garden City, NY, USA, 1966. [Google Scholar]
31. Kim, Y.; Mutlu, B. How social distance shapes human–robot interaction. Int. J. Hum.-Comput. Stud. 2014, 72, 783–795.
32. van Houwelingen-Snippe, J.; Vroon, J.; Englebienne, G.; Haselager, P. Blame my telepresence robot: Joint effect of proxemics and attribution on interpersonal attraction. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 162–168.
33. Rossi, S.; Ercolano, G.; Raggioli, L.; Savino, E.; Ruocco, M. The disappearing robot: An analysis of disengagement and distraction during non-interactive tasks. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 522–527.
34. Mead, R.; Matarić, M.J. Robots have needs too: How and why people adapt their proxemic behavior to improve robot social signal understanding. J. Hum.-Robot Interact. 2016, 5, 48–68.
35. Mead, R.; Matarić, M.J. Proxemics and performance: Subjective human evaluations of autonomous sociable robot distance and social signal understanding. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 5984–5991.
36. Hediger, H. Studies of the psychology and behavior of captive animals in zoos and circuses. Science 1955, 124, 592.
37. Argyle, M.; Dean, J. Eye-contact, distance and affiliation. Sociometry 1965, 28, 289–304.
38. Aiello, J.R. A further look at equilibrium theory: Visual interaction as a function of interpersonal distance. Environ. Psychol. Nonverbal Behav. 1977, 1, 122–140.
39. Ciolek, T.M.; Kendon, A. Environment and the spatial arrangement of conversational encounters. Sociol. Inq. 1980, 50, 237–271.
40. Walters, M.L.; Dautenhahn, K.; Koay, K.L.; Kaouri, C.; Boekhorst, R.T.; Nehaniv, C.; Werry, I.; Lee, D. Close encounters: Spatial distances between people and a robot of mechanistic appearance. In Proceedings of the 5th IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan, 5 December 2005; pp. 450–455.
41. Walters, M.L.; Dautenhahn, K.; Te Boekhorst, R.; Koay, K.L.; Kaouri, C.; Woods, S.; Nehaniv, C.; Lee, D.; Werry, I. The influence of subjects’ personality traits on personal spatial zones in a human–robot interaction experiment. In Proceedings of the ROMAN 2005—IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 347–352.
42. Rossi, S.; Staffa, M.; Bove, L.; Capasso, R.; Ercolano, G. User’s personality and activity influence on HRI comfortable distances. In Proceedings of the International Conference on Social Robotics, Tsukuba, Japan, 22–24 November 2017; pp. 167–177.
43. Obaid, M.; Sandoval, E.B.; Złotowski, J.; Moltchanova, E.; Basedow, C.A.; Bartneck, C. Stop! That is close enough. How body postures influence human–robot proximity. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 354–361.
44. Eresha, G.; Häring, M.; Endrass, B.; André, E.; Obaid, M. Investigating the influence of culture on proxemic behaviors for humanoid robots. In Proceedings of the 2013 IEEE RO-MAN, Gyeongju, Korea, 26–29 August 2013; pp. 430–435.
45. Takayama, L.; Pantofaru, C. Influences on proxemic behaviors in human–robot interaction. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 5495–5502.
46. Walters, M.L.; Oskoei, M.A.; Syrdal, D.S.; Dautenhahn, K. A long-term human–robot proxemic study. In Proceedings of the 2011 RO-MAN, Atlanta, GA, USA, 31 July–3 August 2011; pp. 137–142.
47. Mumm, J.; Mutlu, B. Human–robot proxemics: Physical and psychological distancing in human–robot interaction. In Proceedings of the 6th International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 331–338.
48. Walters, M.L.; Syrdal, D.S.; Koay, K.L.; Dautenhahn, K.; Te Boekhorst, R. Human approach distances to a mechanical-looking robot with different robot voice styles. In Proceedings of the RO-MAN 2008—The 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 707–712.
49. Samarakoon, S.M.B.P.; Muthugala, M.A.V.J.; Jayasekara, A.G.B.P.; Elara, M.R. An exploratory study on proxemics preferences of humans in accordance with attributes of service robots. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–7.
50. Trovato, G.; Paredes, R.; Balvin, J.; Cuellar, F.; Thomsen, N.B.; Bech, S.; Tan, Z.H. The sound or silence: Investigating the influence of robot noise on proxemics. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 713–718.
51. Syrdal, D.S.; Dautenhahn, K.; Walters, M.L.; Koay, K.L. Sharing spaces with robots in a home scenario—Anthropomorphic attributions and their effect on proxemic expectations and evaluations in a live HRI trial. In Proceedings of the AAAI Fall Symposium: AI in Eldercare: New Solutions to Old Problems, Arlington, VA, USA, 7–9 November 2008; pp. 116–123.
52. Butler, J.T.; Agah, A. Psychological effects of behavior patterns of a mobile personal robot. Auton. Robot. 2001, 10, 185–202.
53. Zhang, J.; Janeh, O.; Katzakis, N.; Krupke, D.; Steinicke, F. Evaluation of proxemics in dynamic interaction with a mixed reality avatar robot. In Proceedings of the ICAT-EGVE, Tokyo, Japan, 11–13 September 2019; pp. 37–44.
54. Dubois, M.; Claret, J.A.; Basañez, L.; Venture, G. Influence of emotional motions in human–robot interactions. In Proceedings of the International Symposium on Experimental Robotics, Nagasaki, Japan, 3–8 October 2016; pp. 799–808.
55. Walters, M.L.; Koay, K.L.; Woods, S.N.; Syrdal, D.S.; Dautenhahn, K. Robot to human approaches: Preliminary results on comfortable distances and preferences. In Proceedings of the AAAI Spring Symposium: Multidisciplinary Collaboration for Socially Assistive Robotics, Stanford, CA, USA, 26–28 March 2007; p. 103.
56. Koay, K.L.; Syrdal, D.S.; Ashgari-Oskoei, M.; Walters, M.L.; Dautenhahn, K. Social roles and baseline proxemic preferences for a domestic service robot. Int. J. Soc. Robot. 2014, 6, 469–488.
57. Walters, M.L.; Dautenhahn, K.; Woods, S.N.; Koay, K.L.; Te Boekhorst, R.; Lee, D. Exploratory studies on social spaces between humans and a mechanical-looking robot. Connect. Sci. 2006, 18, 429–439.
58. Ruijten, P.A.; Cuijpers, R.H. Stopping distance for a robot approaching two conversating persons. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 224–229.
59. Ruijten, P.A.; Cuijpers, R.H. Do not let the robot get too close: Investigating the shape and size of shared interaction space for two people in a conversation. Information 2020, 11, 147.
60. Neggers, M.M.; Cuijpers, R.H.; Ruijten, P.A. Comfortable passing distances for robots. In Proceedings of the International Conference on Social Robotics, Qingdao, China, 28–30 November 2018; pp. 431–440.
61. Li, R.; van Almkerk, M.; van Waveren, S.; Carter, E.; Leite, I. Comparing human–robot proxemics between virtual reality and the real world. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 431–439.
62. Petrak, B.; Weitz, K.; Aslan, I.; Andre, E. Let me show you your new home: Studying the effect of proxemic-awareness of robots on users’ first impressions. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–7.
63. Wojciechowska, A.; Frey, J.; Sass, S.; Shafir, R.; Cauchard, J.R. Collocated human-drone interaction: Methodology and approach strategy. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 172–181.
64. Jensen, W.; Hansen, S.; Knoche, H. Knowing you, seeing me: Investigating user preferences in drone-human acknowledgement. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12.
65. Duncan, B.A.; Murphy, R.R. Comfortable approach distance with small unmanned aerial vehicles. In Proceedings of the 2013 IEEE RO-MAN, Gyeongju, Korea, 26–29 August 2013; pp. 786–792.
66. Yeh, A.; Ratsamee, P.; Kiyokawa, K.; Uranishi, Y.; Mashita, T.; Takemura, H.; Fjeld, M.; Obaid, M. Exploring proxemics for human-drone interaction. In Proceedings of the 5th International Conference on Human Agent Interaction, Bielefeld, Germany, 17–20 October 2017; pp. 81–88.
67. Koay, K.L.; Syrdal, D.; Bormann, R.; Saunders, J.; Walters, M.L.; Dautenhahn, K. Initial design, implementation and technical evaluation of a context-aware proxemics planner for a social robot. In Proceedings of the International Conference on Social Robotics, Tsukuba, Japan, 22–24 November 2017; pp. 12–22.
68. Walters, M.L.; Dautenhahn, K.; Te Boekhorst, R.; Koay, K.L.; Syrdal, D.S.; Nehaniv, C.L. An empirical framework for human–robot proxemics. In New Frontiers in Human-Robot Interaction; John Benjamins Publishing: Amsterdam, The Netherlands, 2009.
69. Jang, J.S. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685.
70. Balasuriya, J.C.; Watanabe, K.; Pallegedara, A. ANFIS based active personal space for autonomous robots in ubiquitous environments. In Proceedings of the 2007 International Conference on Industrial and Information Systems, Peradeniya, Sri Lanka, 9–11 August 2007; pp. 523–528.
71. Vitiello, A.; Acampora, G.; Staffa, M.; Siciliano, B.; Rossi, S. A neuro-fuzzy-Bayesian approach for the adaptive control of robot proxemics behavior. In Proceedings of the 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Naples, Italy, 9–12 July 2017; pp. 1–6.
72. Bhagya, S.; Samarakoon, P.; Sirithunge, H.C.; Viraj, M.; Muthugala, J.; Buddhika, A.; Jayasekara, P. Proxemics and approach evaluation by service robot based on user behavior in domestic environment. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 8192–8199.
73. Kukolj, D. Design of adaptive Takagi–Sugeno–Kang fuzzy models. Appl. Soft Comput. 2002, 2, 89–103.
74. Kosiński, T.; Obaid, M.; Woźniak, P.W.; Fjeld, M.; Kucharski, J. A fuzzy data-based model for human–robot proxemics. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 335–340.
75. Gao, Y.; Wallkötter, S.; Obaid, M.; Castellano, G. Investigating deep learning approaches for human–robot proxemics. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1093–1098.
76. Mitsunaga, N.; Smith, C.; Kanda, T.; Ishiguro, H.; Hagita, N. Adapting robot behavior for human–robot interaction. IEEE Trans. Robot. 2008, 24, 911–916.
77. Samarakoon, S.M.B.P.; Muthugala, M.A.V.J.; Jayasekara, A.G.B.P. Replicating natural approaching behavior of humans for improving robot’s approach toward two persons during a conversation. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 552–558.
78. Mead, R.; Matarić, M.J. Perceptual models of human–robot proxemics. In Proceedings of the Experimental Robotics, Essaouira, Morocco, 15–18 June 2016; pp. 261–276.
79. Mead, R.; Matarić, M.J. Autonomous human–robot proxemics: Socially aware navigation based on interaction potential. Auton. Robot. 2017, 41, 1189–1201.
80. Henkel, Z.; Bethel, C.L.; Murphy, R.R.; Srinivasan, V. Evaluation of proxemic scaling functions for social robotics. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 374–385.
81. Mead, R.; Atrash, A.; Matarić, M.J. Automated proxemic feature extraction and behavior recognition: Applications in human–robot interaction. Int. J. Soc. Robot. 2013, 5, 367–378.
82. Eddy, S.R. What is a hidden Markov model? Nat. Biotechnol. 2004, 22, 1315–1316.
83. Pacchierotti, E.; Christensen, H.I.; Jensfelt, P. Evaluation of passing distance for social robots. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 315–320.
84. Talebpour, Z.; Viswanathan, D.; Ventura, R.; Englebienne, G.; Martinoli, A. Incorporating perception uncertainty in human-aware navigation: A comparative study. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 570–577.
85. Vega-Magro, A.; Manso, L.; Bustos, P.; Núñez, P.; Macharet, D.G. Socially acceptable robot navigation over groups of people. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 1182–1187.
86. Repiso, E.; Garrell, A.; Sanfeliu, A. Robot approaching and engaging people in a human–robot companion framework. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 8200–8205.
Figure 1. Robot approaching a goal position while perceiving human actions and maintaining an appropriate distance with the person; (a): A person doing an exercise, (b): A person working on a laptop, and, (c): Two persons having a conversation.
Figure 2. Taxonomy used in this paper to analyze the proxemics literature.
Figure 3. Hall’s proxemic zones introduced in [30].
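For concreteness, Hall’s zones can be expressed as a simple distance classifier. The following Python sketch uses the commonly cited boundaries of roughly 0.45 m, 1.2 m, and 3.6 m; the exact values vary across the literature, so they should be treated as indicative rather than definitive.

```python
# Minimal sketch: classify a measured human-robot distance into Hall's proxemic
# zones. The 0.45 m / 1.2 m / 3.6 m boundaries are the commonly cited values for
# the intimate, personal, social, and public zones; they are indicative only.

def hall_zone(distance_m: float) -> str:
    """Return the Hall proxemic zone for a distance given in metres."""
    if distance_m < 0.45:
        return "intimate"
    if distance_m < 1.2:
        return "personal"
    if distance_m < 3.6:
        return "social"
    return "public"

if __name__ == "__main__":
    for d in (0.3, 0.8, 2.0, 5.0):
        print(f"{d:.1f} m -> {hall_zone(d)} zone")
```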
Figure 4. Six basic types of F-formation defined by Ciolek and Kendon [39].
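As a rough geometric illustration of the F-formation idea (and not the approach of any specific work reviewed here), the sketch below estimates the o-space centre of a small conversational group by projecting each participant forward by an assumed stride length, and then places a candidate joining pose for a robot on the resulting circle. The stride value and the joining angle are illustrative assumptions.

```python
# Illustrative geometry only: estimate the o-space centre of a conversational
# group and a candidate joining pose for a robot. Each person "votes" for a point
# one stride ahead along their facing direction; the centre is the mean vote.
import math

def o_space_centre(people, stride=0.6):
    """people: list of (x, y, theta) with theta the facing direction in radians.
    stride (m) is an assumed projection distance; tune per application."""
    votes = [(x + stride * math.cos(t), y + stride * math.sin(t)) for x, y, t in people]
    cx = sum(v[0] for v in votes) / len(votes)
    cy = sum(v[1] for v in votes) / len(votes)
    return cx, cy

def joining_pose(people, approach_angle, stride=0.6):
    """Place the robot on the o-space circle at approach_angle (radians),
    at the mean person-to-centre radius, facing the centre."""
    cx, cy = o_space_centre(people, stride)
    radius = sum(math.hypot(x - cx, y - cy) for x, y, _ in people) / len(people)
    rx = cx + radius * math.cos(approach_angle)
    ry = cy + radius * math.sin(approach_angle)
    return rx, ry, math.atan2(cy - ry, cx - rx)  # heading toward the centre

if __name__ == "__main__":
    # Two persons 1.2 m apart, facing each other (vis-a-vis arrangement).
    pair = [(0.0, 0.0, 0.0), (1.2, 0.0, math.pi)]
    print(o_space_centre(pair))           # roughly the midpoint between them
    print(joining_pose(pair, math.pi / 2))  # join from the side, facing the centre
```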
Figure 5. An overview of the HRP study conducted in [49]. (a): A robot with varying internal noise levels, (b): An anthropomorphic robot head, (c): A manipulator, and (d): A service robot.
Figure 6. The experimental arrangement of the HRP study reported in [58].
Figure 7. The taxonomy of cases considered in the study [66].
Figure 8. The set of locations defined around a user for determining the comfortable HRP by the proxemic planner proposed in [67].
Figure 9. An overview of the ANFIS proposed in [70].
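An ANFIS learns its membership functions and rule consequents from data; the mechanism underneath is first-order Takagi–Sugeno inference. The following sketch illustrates that mechanism with two hand-written rules mapping a normalized familiarity input to a personal-space radius. The rule shapes and numeric values are invented for illustration and are not taken from [70].

```python
# Generic first-order Takagi-Sugeno (TSK) inference of the kind an ANFIS network
# learns. The two rules, membership shapes, and numbers below are invented purely
# to illustrate the mechanism; they are not parameters from [70].

def mu_low(x):   # membership of "low familiarity": linear ramp 1 -> 0 over [0, 1]
    return max(0.0, min(1.0, 1.0 - x))

def mu_high(x):  # membership of "high familiarity": linear ramp 0 -> 1 over [0, 1]
    return max(0.0, min(1.0, x))

def personal_space(familiarity):
    """Blend the rule consequents by their firing strengths (weighted average)."""
    w1, w2 = mu_low(familiarity), mu_high(familiarity)
    f1 = 1.4 - 0.2 * familiarity   # rule 1: low familiarity -> keep more distance
    f2 = 0.9 - 0.3 * familiarity   # rule 2: high familiarity -> allow closer approach
    return (w1 * f1 + w2 * f2) / (w1 + w2)

if __name__ == "__main__":
    for x in (0.0, 0.5, 1.0):
        print(f"familiarity={x:.1f} -> personal space {personal_space(x):.2f} m")
```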
Figure 10. An overview of the system proposed in [71].
Figure 11. Motivation behind the method proposed in [72]. (a): A small interpersonal distance is sufficient since the body joints are not widely extended and are not moving fast. (b): A large interpersonal distance is required since the body joints are widely extended and moving at a considerable speed.
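One simple way to encode this intuition (a sketch only, not the fuzzy system of [72]) is to inflate a baseline distance in proportion to how far the tracked joints extend from the torso and how fast they move. The baseline and gain values below are placeholder assumptions.

```python
# Sketch of the intuition in Figure 11: widen the interpersonal distance when the
# body joints are extended far from the torso and/or moving quickly. The baseline
# and gains are placeholder values, not parameters from [72].

def required_distance(joint_offsets_m, joint_speeds_mps,
                      baseline=0.8, k_extent=0.5, k_speed=0.3):
    """joint_offsets_m: horizontal distances of tracked joints from the torso (m).
    joint_speeds_mps: speeds of the same joints (m/s)."""
    extent = max(joint_offsets_m, default=0.0)
    speed = max(joint_speeds_mps, default=0.0)
    return baseline + k_extent * extent + k_speed * speed

if __name__ == "__main__":
    # Working on a laptop: arms close to the body, nearly static.
    print(required_distance([0.15, 0.20], [0.05, 0.05]))  # about 0.9 m
    # Exercising: arms stretched out and swinging.
    print(required_distance([0.70, 0.65], [1.2, 1.0]))    # about 1.5 m
```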
Figure 12. The topology of the deep learning network proposed in [75].
Figure 13. The method proposed in [80] to adapt a robot’s behavior based on HRP.
Figure 14. Proxemic aware passing strategy proposed in [83].
Figure 15. HRP aware path planning strategy proposed in [85].
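Proxemic awareness is commonly added to navigation by layering a personal-space cost around each detected person on top of the planner’s obstacle costmap. The sketch below builds such a layer with an isotropic Gaussian penalty; it is a generic simplification rather than the specific strategies of [83] or [85].

```python
# Generic proxemic cost layer for a grid planner: add a Gaussian "personal space"
# penalty around each person so that low-cost paths keep a comfortable distance.
# This is a simplified, isotropic illustration, not the strategy of [83] or [85].
import numpy as np

def proxemic_cost_layer(shape, resolution, people, sigma=0.9, peak=100.0):
    """shape: (rows, cols) of the grid; resolution: metres per cell;
    people: list of (x, y) positions in metres; sigma: spread of the
    personal-space penalty in metres; peak: cost at the person's position."""
    rows, cols = shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    xs = xs * resolution
    ys = ys * resolution
    cost = np.zeros(shape)
    for px, py in people:
        d2 = (xs - px) ** 2 + (ys - py) ** 2
        cost = np.maximum(cost, peak * np.exp(-d2 / (2.0 * sigma ** 2)))
    return cost

if __name__ == "__main__":
    layer = proxemic_cost_layer((100, 100), 0.05, people=[(2.0, 2.5)])
    # A planner would add this layer to its obstacle costmap before searching.
    print(layer.max(), layer[0, 0])
```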
Table 1. HRP preferences revealed by human–robot exploratory studies.

[40]
  User attributes: No experience. *Children: 1.75 m; *Adults: <=0.5 m
  Robot attributes: Mechanistic appearance (PeopleBot)
  Context: Robot toward human; human toward robot

[41]
  User attributes: Adults, no experience. 0.45 m–3.6 m
  Robot attributes: Mechanistic appearance (PeopleBot)
  Context: Robot toward human; human toward robot

[42]
  User attributes: Adults, no experience. *Extraversion: High—0.90 m, Low—0.85 m; *Conscientiousness: High—0.85 m, Low—0.95 m; *Agreeableness: High—0.87 m; *Neuroticism: High—0.95 m, Low—0.67 m; *Openness: High—0.90 m, Low—0.70 m
  Robot attributes: Pioneer 3DX
  Context: Robot approach toward human. *User body posture: Standing—0.95 m, Walking—0.96 m, Sitting—0.78 m, Lying—0.82 m

[43]
  User attributes: Adults, no experience, New Zealand
  Robot attributes: Nao
  Context: *Human toward a sitting robot: Male—0.40 m, Female—0.30 m; *Robot toward a sitting human: Male—0.30 m, Female—0.40 m; *Human toward a standing robot: Male—0.55 m, Female—0.40 m; *Robot toward a standing human: Male—0.40 m, Female—0.45 m

[44]
  User attributes: Adults, no experience. *German culture: Robot–robot—0.42 m, Human–robot—0.86 m; *Arabic culture: Robot–robot—0.4 m, Human–robot—0.66 m
  Robot attributes: Nao
  Context: Placing robots for conversation

[45]
  User attributes: Adults. *Experience: No—0.34 m, 1 year—0.25 m; *Pet owner: Yes—0.39 m, No—0.52 m
  Robot attributes: PR2
  Context: Robot toward human. *Robot’s gaze toward human’s head: Female—0.30 m, Male—0.25 m; *Robot’s gaze toward human’s head: Female—0.25 m, Male—0.30 m

[46]
  User attributes: Adults. *Experience in weeks: 1—0.50 m, 2—0.43 m, 3—0.46 m, 4—0.43 m, 5—0.45 m, 6—0.51 m
  Robot attributes: Mechanistic appearance (PeopleBot)
  Context: Robot toward human; kitchen and living room

[47]
  User attributes: Adults. *Pet owners prefer higher HRP
  Robot attributes: Wakamaru
  Context: Human toward robot. *Mutual gaze: Female—1.0 m, Male—1.1 m; *Averted gaze: Female—1.0 m, Male—1.0 m

[48]
  User attributes: Adults, no experience and experienced
  Robot attributes: Mechanistic appearance (PeopleBot). *Natural male voice: 0.52 m; *Natural female voice: 0.60 m; *Synthetic voice: 0.80 m; *No voice: 0.42 m
  Context: Robot toward human; human toward robot

[49]
  User attributes: Adults, no experience, South Asian
  Robot attributes: *Facial emotion: Happy—0.60 m, Sad—0.45 m, Angry—1.22 m, Surprise—0.88 m, Disgust—1.22 m, Fear—0.88 m. *Vocal emotions: Happy—0.85 m, Sad—0.67 m, Angry—1.27 m, Fear—0.82 m. *Noise: 0 dB—0.72 m, 53 dB—0.76 m, 57 dB—0.92 m, 62 dB—1.07 m. *Physical appearance: MIRob—0.68 m, Robot head—0.67 m, K3 manipulator—0.97 m, Fuzzbot—0.63 m
  Context: Human approach toward robot for a conversation

[50]
  User attributes: Adults, no experience
  Robot attributes: *Noise: Regular sound (65 dB)—1.10 m, Silent—0.92 m, Mask sound—1.03 m
  Context: Human passing a robot in a corridor

[51]
  User attributes: Adults, no experience
  Robot attributes: *Appearance: Humanoid—0.62 m, Mechanoid—0.50 m. *Mental model: Human-like—0.57 m, Non-human—0.52 m
  Context: Robot toward human; robot passing human

[52]
  User attributes: Adults, no experience
  Robot attributes: *Robot up to human height: 0.55 m; *Robot up to knee height: 0.25 m
  Context: Robot approaching toward human; robot passing human

[53]
  User attributes: Adults, no experience
  Robot attributes: Mixed-reality avatar robot built on a Pioneer 3-DX
  Context: Human following a robot: *Speed 0.8 m/s: Avatar visible—1.30 m, Avatar invisible—1.25 m; *Speed 1.0 m/s: Avatar visible—1.42 m, Avatar invisible—1.38 m; *Speed 1.2 m/s: Avatar visible—1.50 m, Avatar invisible—1.55 m. Human avoiding a robot: *Speed 0.8 m/s: Avatar visible—2.25 m, Avatar invisible—2.35 m; *Speed 1.0 m/s: Avatar visible—2.18 m, Avatar invisible—2.8 m; *Speed 1.2 m/s: Avatar visible—2.00 m, Avatar invisible—2.04 m

[54]
  User attributes: Adults, no experience, Japanese
  Robot attributes: Pepper. *Body and facial emotions: Happy—1.09 m, Neutral—1.18 m, Sad—1.37 m
  Context: Human toward robot

[55]
  User attributes: Adults, no experience
  Robot attributes: Mechanistic appearance (PeopleBot)
  Context: Robot toward human. *Seated on a chair in an open space; *Standing in an open space; *Seated at a table in an open space; *Standing with back against a wall. For all, front left and front right—most comfortable; right—least preferred

[56]
  User attributes: Adults. *Short term: prefer side approaching; *Long term: no preference in direction
  Robot attributes: Care-O-bot
  Context: Robot toward human. *Delivering a drink: 0.5 m, front; *Delivering a hat: 0.5 m, side

[57]
  User attributes: Adults, no experience
  Robot attributes: Mechanistic appearance (PeopleBot)
  Context: Robot handing over an object while the human is sitting on a chair. *Right: most preferred; *Front: least preferred

[58]
  User attributes: Adults, no experience
  Robot attributes: Nao
  Context: Robot approach toward two persons having a conversation. *Directions: −70°: 0.92 m, −35°: 0.98 m, 35°: 1.05 m, 70°: 1.11 m

[60]
  User attributes: Adults, no experience
  Robot attributes: Pepper
  Context: *Human passing a robot in a corridor: 1.1 m

[61]
  User attributes: Adults, no experience
  Robot attributes: Pepper. *Virtual Reality (VR) and real robot: VR—0.46 m, Real—0.53 m. *VR: Sound—0.40 m, No sound—0.52 m
  Context: Robot approach toward human

[62]
  User attributes: Adults, no experience
  Robot attributes: VR
  Context: Collaborative exploration of a room by a robot and a human

[63]
  User attributes: Adults, no experience
  Robot attributes: Drone. Speed: 0.5 m/s; trajectory: straight
  Context: Drone toward human. Front direction most comfortable. *1.2 m personal zone is preferred

[64]
  User attributes: Adults, no experience
  Robot attributes: Drone. Height: 1.5 m; speed: 0.7 m/s
  Context: *Drone toward human: 1.8 m; *Human toward drone: 1.6 m

[65]
  User attributes: Adults, experienced
  Robot attributes: AR100R drone. *Height: Short—0.62 m, Tall—0.63 m
  Context: Drone approach toward human

[66]
  User attributes: Adults. *Gender: Female—1.5 m, Male—1.1 m; *Pet ownership: Yes—1.13 m, No—1.39 m
  Robot attributes: Drone. *Social shape: Greeting voice—1.06 m, No greeting voice—1.14 m. *Non-social shape: Greeting voice—1.33 m, No greeting—1.38 m
  Context: Drone approach toward human. *Approaching height 1.2 m: Lateral distance 0 m—1.14 m, 0.3 m—1.02 m, 0.6 m—0.95 m. *Approaching height 1.8 m: Lateral distance 0 m—1.35 m, 0.3 m—1.38 m, 0.6 m—1.27 m

The symbol ‘*’ indicates the factors/parameters varied for examining the HRP in each study.
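One way to make empirical findings such as those in Table 1 actionable on a robot is to encode them as context-keyed baseline distances that a behavior planner can query. The sketch below does this for a handful of entries drawn from the table; the key names and the conservative fallback value are illustrative choices rather than part of any cited method.

```python
# Sketch: encode a few of the empirical stopping/passing distances from Table 1 as
# context-keyed baselines a robot could query. The key names and the conservative
# fallback are illustrative choices, not part of any cited method.

BASELINE_HRP_M = {
    ("approach_person", "robot_happy_face"): 0.60,    # [49]
    ("approach_person", "robot_angry_face"): 1.22,    # [49]
    ("pass_in_corridor", "default"): 1.10,            # [60]
    ("approach_conversing_pair", "35_deg"): 1.05,     # [58]
    ("drone_approach", "default"): 1.80,              # [64]
}

def baseline_distance(task: str, condition: str = "default",
                      fallback: float = 1.2) -> float:
    """Return a baseline human-robot distance (m) for a task/condition pair,
    falling back to a conservative social-zone value when no entry exists."""
    return BASELINE_HRP_M.get((task, condition),
                              BASELINE_HRP_M.get((task, "default"), fallback))

if __name__ == "__main__":
    print(baseline_distance("pass_in_corridor"))                     # 1.1
    print(baseline_distance("approach_person", "robot_happy_face"))  # 0.6
    print(baseline_distance("unknown_task"))                         # 1.2 fallback
```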
Table 2. Summary of current status and potential future work of HRP exploratory studies.

User attributes
  Current status: Studies are limited to the user attributes
    • Gender (e.g., [43])
    • Age (e.g., [40])
    • Personality factors (e.g., [41,42])
    • Culture: Arabic and German (e.g., [44])
    • Experience (e.g., [45,46,47])
  Potential future work:
    • Study the variation of HRP with respect to the emotional state (e.g., happy, angry, sad) of users
    • Consideration of other cultures

Robot attributes
  Current status: Studies are limited to the robot attributes
    • Voice characteristics (e.g., male, female, natural, synthetic) (e.g., [48])
    • Appearance (humanoid, mechanoid) (e.g., [49,51])
    • Height (e.g., [52])
    • Noise level (e.g., [49,50])
    • Facial emotions (e.g., [49,54])
    • Vocal emotions (e.g., [49,54])
  Potential future work:
    • Consideration of robot gender revealed from the appearance
    • Consideration of androids and zoomorphic robots for appearance
    • Consideration of dynamic mechanical attributes such as the speed of moving parts and temperature

Context
  Current status:
    • HRP has been studied based on robot and user encounters related to
      • Robot approach toward user and vice versa (e.g., [40,41,46])
      • Approaching and passing (e.g., [53,60])
    • The study of effects from items delivered/handed over by a robot is limited to the delivery of a hat or a drink (e.g., [57])
    • Consideration of user context is limited to the posture categories sitting, standing, walking, and lying (e.g., [42])
    • Encounters in open and constrained spaces for a user have been considered (e.g., [55])
    • Effects on HRP in the context of interactions with drones and in VR settings have been studied (e.g., [61,62,63,64,65,66])
  Potential future work:
    • Variation of HRP with different user contexts or activities, such as having a meal or watching TV, could be explored
    • Variation of HRP with groups of multiple persons could be evaluated
    • Consideration of HRP variation with environments such as living rooms, corridors, and kitchens, or public and private places
    • Exploration of the variation of users’ proxemic preferences in accordance with the privacy concerns of the context
Table 3. Summary of current status and potential future work of methods for establishing HRP on service robots.

Scope
  Current status:
    • The existing work on maintaining HRP is limited to general-purpose service robots
  Potential future work:
    • Develop methods to maintain HRP during different service tasks
    • Consideration of HRP awareness for minimalistic robots such as floor-cleaning robots

Interaction
  Current status:
    • The majority of work is limited to single-person cases (e.g., [67,71,72,74,75,83])
    • Multigroup cases are limited to the context of conversation (e.g., [77,85])
    • User-perceiving abilities are limited, e.g.,
      • Personality factors are perceived through questionnaires (e.g., [71])
      • Wearable devices are used to recognize user activity (e.g., [71])
      • RGB-D cameras are used to perceive user behavior (e.g., [72])
  Potential future work:
    • Extending the methods to cope with multigroup cases in different activity contexts
    • Use of vision-based inputs to determine the required HRP
    • Consideration of multimodal inputs for systems

Adaptation
  Current status:
    • The existing methods can adapt HRP based on
      • User’s activity defined by posture (e.g., [71])
      • User’s dynamic behavior (e.g., [72])
      • User’s personality factors (e.g., [71])
      • Robot’s attributes such as appearance and height (e.g., [70])
      • Service task (limited to verbal or physical interaction) (e.g., [67])
    • Computational intelligence techniques are often used, e.g.,
      • Fuzzy logic (e.g., [72,74])
      • ANFIS (e.g., [70,71])
      • Probabilistic methods (e.g., [84])
    • Performance evaluation is often based on post hoc subjective measurements, e.g.,
      • Questionnaires (e.g., [71,77])
      • Comparison with empirical data (e.g., [72,74,75])
  Potential future work:
    • Facilitate the adaptation to entities such as
      • Cultural backgrounds
      • Emotions
      • Privacy concerns
      • Different service tasks
    • Consideration of multiple entities for determining HRP
    • Use of artificial intelligence techniques such as reinforcement learning to personalize HRP
    • Development/use of objective indexes for evaluating performance, e.g.,
      • Heart rate variation
      • Subconscious body signals
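As a minimal illustration of the feedback-driven personalization suggested in Table 3 (a sketch of the idea, not a published method), a robot could nudge a stored preferred distance after each interaction based on whether the user reported being comfortable:

```python
# Minimal sketch of feedback-driven HRP personalization, in the spirit of the
# reinforcement-learning direction suggested in Table 3. The update rule, bounds,
# and step size are illustrative choices, not a published method.

def update_preferred_distance(current_m: float, user_comfortable: bool,
                              step: float = 0.05,
                              min_m: float = 0.45, max_m: float = 3.6) -> float:
    """Shrink the stored preferred distance slightly after comfortable
    interactions and grow it after uncomfortable ones. The bounds roughly
    follow Hall's personal/social zone limits."""
    proposal = current_m - step if user_comfortable else current_m + step
    return max(min_m, min(max_m, proposal))

if __name__ == "__main__":
    d = 1.2  # start near the personal/social boundary
    for feedback in (True, True, False, True):
        d = update_preferred_distance(d, feedback)
        print(f"preferred distance -> {d:.2f} m")
```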
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
