Abstract
The behavioral design of robots is one of the central concerns in human-robot interaction [1, 2]. In the design of human-robot communicative interaction, many approaches have been presented for finding preferable behaviors that people accept. In these studies, users’ impressions of robots during interactions have been examined under the users’ initiative, with users evaluating the robot’s responses. Conversely, there have been fewer studies evaluating human impressions when a robot takes the initiative and performs active behavior toward a human. We reviewed human-robot interactions and presented our behavioral designs while creating situations in which a robot explicitly performed active behavior. Based on these designs, we implemented greeting functions for the robot. The objective of this study is to investigate users’ impressions of the robot, especially with respect to its activeness. We examined the differences in impressions depending on the presence or absence of the robot’s active behavior. The results show significant differences in activity, affinity, and intentionality.
1 Introduction
In recent years, robots that can communicate with people while interacting with them have become more common. There have been calls for further development of robot technology, including functional applications in new areas, humanoid platforms, and humanoid robots for entertainment. Prior studies of human-robot interaction suggest that a large amount of work is still needed on evaluating the impressions that a robot leaves on people. Nakata et al. conducted an experiment on interpersonal valence based on the impressions people had after seeing the reaction a robot made when they touched it with their hands. The results revealed that when the robot interacted with people in a way that showed a kind of emotional attachment to them, users had a positive impression of the robot [1]. Kakio et al. investigated the differences in user impressions depending on the reaction of a robot when users pushed it, and showed that impressions differed depending on the robot’s reactions [2]. Many of these studies on user impressions were conducted as follows: users took the initiative and did something, and a robot reacted to that. In other words, most of the studies focused on the passive behavior of the robots.

When two people communicate, two positions are assumed: one side acts (the party that is currently active, as in speaking) and the other side reacts (the party that is currently passive, as in listening). In previous studies, the robot was mostly placed on the passive side, and the case in which the robot acts was not sufficiently studied. Thus, we defined behavior in which a robot takes the initiative and does something toward a human user as “active behavior”, and studied the changes in users’ impressions depending on the presence or absence of that active behavior. In addition, we present the behavioral design model of our robot performing active behavior as an action state transition. There have been many studies on human-robot interaction [1–3, 8, 9]. However, few of them classify the behavior of a robot as “passive” or “active” and then demonstrate its basic behavior, as an action guideline, in the form of a state model. That is what we attempted to do.
In this study, in accordance with our state model of active behavior, we conducted an experiment on greetings. We thought that with greetings, the intention of the robot would be easily understood, even if we included a variety of active behaviors. The reason is as follows: a previous study investigated the movements of a robot that looked in the direction of a person, with settings for the robot’s looking direction control [3]. However, for the evaluation items “activity”, “affinity”, and “pleasantness”, no statistically significant differences in impressions were observed between the presence and absence of the looking direction control.
Those results did not show that active behavior had an effect; a certain level of behavior appears to be required before an action is recognized as active behavior. Thus, we focused on greetings as active behavior that might lead to improvements in impression evaluations, and we implemented them on a robot. To support this greeting behavior, we mounted two optical sensors on RAPIRO [4] to create contact interaction between a user and the robot, which reacts when touched. For the interaction prior to the greeting, we implemented a looking direction control action and a greeting action in which the robot raises and lowers its arms. In the experiment, we carried out a questionnaire survey on the impressions users had of the robot’s behavior with and without active behavior. The results showed significant differences in “affinity”, “activity”, and “intentionality”. This suggests that in human-robot interaction, a robot’s active behavior leads to better impressions when the expected actions are properly designed and implemented.
This paper is organized as follows. Section 2 presents related work and a point of debate drawn from a previous study. Section 3 presents the design and proposal of active behavior in humanoid robots. Section 4 presents the implementation, the experiment, and a discussion. Section 5 summarizes the study and discusses future challenges.
It should be noted that in this paper, the term “interaction” refers to the robot actions and reactions whose impressions on people we investigate, while “communication” refers to correspondence and mutual understanding between robots and humans. In this study, greeting behavior was the main interaction; thus, interactions containing physical contact in the experiment are called “contact interactions” to reduce ambiguity.
2 Related Work
2.1 Hypothesis on Active Behavior
As humanoid robots such as ASIMO [5], Pepper [6], and Palro [7] have become more and more common, a variety of studies on human-robot interaction have been conducted. Nakai et al. assumed that stuffed animals with more realistic features might help reduce boredom in interactions with users, and studied the realism of stuffed animals with fluctuating light in their eyes [8]. In addition, Nakata et al. created a robot that showed receptive behavior (nodding), repulsive behavior (dispelling behavior), and reactionless behavior in response to contact with a human hand, and investigated the affinity caused by interpersonal valence [1].
These prior studies assumed an animal-like robot, and applications to humanoid robots were not fully considered. There are also some studies on humanoid robots. Kakio et al. investigated the impressions users had of a robot’s reactions when they pushed it, with the aim of determining whether the robot could naturally express its situation through its reactive action to the user’s push [2]. Yano et al. investigated the differences in impressions users felt depending on the speed at which a robot raised and lowered its arms [9]. Both studies required a preceding active behavior by the user toward the robot; the users then evaluated the robot’s reaction to their actions. In short, most previous studies are based on impressions of robots reacting passively to users.
We attempted to explicitly classify the relationships in conventional human-robot interaction studies as “passive” or “active”, and we aimed to evaluate user impressions of a robot’s active behaviors. Eventually, based on these results, we aim to construct an interaction model that can be used when designing robot behavior.
2.2 Expectations with Active Robot
There is a study in which users’ opinions about the kinds of situations in which they feel affinity or pleasantness toward a robot were organized using the KJ method [10]. That study organized opinions on the following questions into five stages: “What kind of behavior do humans expect from a robot when they see it for the first time?” and “What kind of robot actions do humans find interesting or pleasant?” (Table 1).
Actions 1 to 5 show that people have a variety of expectations of robot behavior, while taking a relatively passive attitude themselves. In actions 4 and 5 especially, they expect a robot to support their daily life and become a partner. Thus, we thought that when a robot can actively carry out the actions shown in these examples, the items in our impression evaluation, including “trust” and “affinity”, would improve. By our definition, the active behavior of a robot means that it takes the initiative and does something for a user without prior commands or instructions. This proactive action is similar to spontaneous behavior in people. Since robots do not naturally have spontaneity, some procedure is necessary to realize this kind of spontaneous behavior. Here, we define “active behavior” as the observation of a person’s state through information transmitted from a sensor, together with an action based on that observation, and we discuss it below.
2.3 Effects of Active Behavior
Kanda et al. demonstrated an example of impression evaluation using proactive actions, namely a robot with a looking direction control action [3]. They investigated autonomous robot behavior based on information from sensors, without direct intervention from a user, so that the robot could leave an impression of intelligence on the people interacting with it. Their study falls within the definition of active behavior used in the present study. Kanda et al. set up a robot with a looking direction control action and let it move freely back and forth in a hallway, using the looking direction control to make the robot look toward students as it passed them. The differences in the impressions left on the students were then evaluated depending on the presence or absence of the looking direction control. The results showed no statistically significant differences in “activity”, “affinity”, or “pleasantness” for the active behavior of looking in the direction of a user.
Considering these results, a robot’s active behavior does not necessarily lead to an improvement in impressions. We focused on the finding that there were no statistically significant differences in “activity”, “affinity”, and “pleasantness” as a debatable point, and set up the objectives and study tasks described in the next chapter.
3 Design and Implementation
3.1 Issues and Objectives
We considered the reasons why there were no statistically significant differences in “activity”, “affinity”, and “pleasantness”, and concluded that the main reasons were as follows:
- The looking direction control action did not satisfy the level of action that users expect from a robot, based on the KJ method results.
- The intention of the robot’s actions was not clear to the users, so the users did not find any meaning in the action itself.
In this study, as indicated in Sect. 2.2, we treated the behaviors that users expect of a robot, based on the KJ method results, as the robot’s active behavior. We thought that improvements in impressions could be obtained when a robot actively carries out actions that users expect. Thus, we decided to evaluate impressions of the active behaviors that people expect from robots in human-robot interaction.
3.2 Active Robot Behavioral Design
We defined those expected actions as the robot’s active behavior and sorted them into groups based on the previous study indicated in Sect. 2.2, with numbers indicating the level of each active behavior. This time we set the greeting action in the first stage as level 1 and carried out our behavioral design. We defined the robot’s behavior using the following three states: waiting for human actions (observation of human non-contact actions), active behavior, and waiting for contact (observation of human contact actions) (see Fig. 1).
Figure 1 (1)–(6) indicates the following:
- (1) Detection of human behavior in a non-contact state.
- (2) Detection of an action, followed by activation of the setting for active behavior.
- (3) Detection of an action, without activation of active behavior.
- (4) Status change from active behavior to waiting for contact.
- (5) Detection of human contact and the state of that contact, and reaction behavior to that contact.
- (6) End of human contact and withdrawal.
As mentioned in Sect. 2.2, active behavior means that the robot acts toward a user proactively. However, robots do not have spontaneity, so to simulate it we set a waiting state in which the robot observes the user’s actions in a non-contact state. The transitions proceed as follows: the robot detects a human action while in the non-contact state (1), and that action triggers the robot’s active behavior (2); if active behavior is enabled, the robot moves on to the active behavior state, and if it is not enabled (3), the robot moves on to mutual contact behavior. After finishing its active behavior, the robot moves to the waiting-for-contact state (4), where it waits for contact; when a user makes contact, it reacts (5). If the contact from the user appears to be over (there is no reaction for a certain period of time), the robot determines that contact has ended, withdraws, and moves from (5) to (6). We define this as a behavioral model in which active behavior is added to a robot performing normal operations. Next, we implemented concrete actions using this model, as sketched below.
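To make the transition structure concrete, the following is a minimal sketch of the state model of Fig. 1 in standard C++. The state and event names, and the flag that enables active behavior, are our own illustrative choices; the actual RAPIRO program is not published in this paper.

```cpp
#include <iostream>

enum class State { WaitingNonContact, ActiveBehavior, WaitingForContact };
enum class Event { HumanDetected, ContactStarted, ContactTimedOut };

// One step of the transition model; the numbers refer to Fig. 1 (1)-(6).
State step(State s, Event e, bool activeBehaviorEnabled) {
    switch (s) {
    case State::WaitingNonContact:
        if (e == Event::HumanDetected)              // (1) human detected
            return activeBehaviorEnabled
                       ? State::ActiveBehavior      // (2) active behavior on
                       : State::WaitingForContact;  // (3) active behavior off
        break;
    case State::ActiveBehavior:
        return State::WaitingForContact;            // (4) greeting finished
    case State::WaitingForContact:
        if (e == Event::ContactStarted)             // (5) react to contact
            return State::WaitingForContact;
        if (e == Event::ContactTimedOut)            // (6) withdraw
            return State::WaitingNonContact;
        break;
    }
    return s;
}

int main() {
    State s = State::WaitingNonContact;
    s = step(s, Event::HumanDetected, true);    // (1)+(2)
    s = step(s, Event::ContactStarted, true);   // leaves active behavior (4)
    s = step(s, Event::ContactTimedOut, true);  // contact over (6)
    std::cout << "back to waiting: "
              << (s == State::WaitingNonContact) << "\n";
}
```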
3.3 Procedure of Greeting Action
In this study, we implemented a greeting action as an active behavior, the type of action that we thought users would expect. Generally, the process of greeting between people unfolds as follows:
1. One person notices the other person.
2. He or she waves a hand or says something.
To greet someone, the robot has to initiate the action. To do so, it has to observe the subject in a non-contact state and then perform some active behavior corresponding to that observation. This is consistent with the behavioral model shown in Sect. 3.2.
In this study, in order to imitate natural human communication, we included looking direction control actions and greetings as our active behaviors. More specifically, when the robot notices a person by detecting their approach, it acts accordingly (turning toward the person with a looking direction control action), and after that, it waves its hand. This allows the robot to demonstrate active behavior more explicitly. The details of the greeting action follow the state transition model in Fig. 1, and the robot is assumed to act according to that model.
3.4 The Design of Distance in Greeting Actions
This time, the greeting action was used as a step prior to human-robot interaction. We assumed that people who want to interact with the robot will approach it; thus, we implemented looking direction control actions and greeting actions, and it was necessary to determine the distances at which they occur. We defined these distances in accordance with the definition of personal space. According to Edward Hall, personal space is classified into four zones, each further divided into a close phase and a far phase [11].
Based on that, we chose the close phase of social distance (120–200 cm) as the zone in which the robot detects the approach of a person, the far phase of personal distance (75–120 cm) as the zone in which the robot starts to turn toward the person with its looking direction control action, and the close phase of personal distance (45–75 cm) as the zone in which the robot performs the greeting and interacts face to face. According to Nishida [12], people normally hold daily conversations at interpersonal distances of 50 cm to 150 cm; we therefore judged our settings to be appropriate. A minimal sketch of these distance thresholds follows.
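The zone boundaries above can be expressed as a simple classifier. This is a minimal sketch, assuming distances arrive in centimeters; the zone labels paraphrase the design choices of this section.

```cpp
#include <initializer_list>
#include <iostream>
#include <string>

// Maps a measured distance to the behavior zone defined in Sect. 3.4.
std::string zoneFor(double distanceCm) {
    if (distanceCm >= 120 && distanceCm <= 200)
        return "social distance, close phase: detect the approach";
    if (distanceCm >= 75 && distanceCm < 120)
        return "personal distance, far phase: looking direction control";
    if (distanceCm >= 45 && distanceCm < 75)
        return "personal distance, close phase: greet face to face";
    return "outside the designed interaction zones";
}

int main() {
    for (double d : {180.0, 100.0, 60.0, 30.0})
        std::cout << d << " cm -> " << zoneFor(d) << "\n";
}
```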
3.5 Implementation
In this section, we describe the hardware and software implementation of the robot that performs the active behavior.
Robot: We used RAPIRO, a humanoid robot kit, for the implementation [4]. Its total dimensions are 250 × 200 × 155 mm. Thirteen motors are mounted on its head, neck, shoulders, elbows, palms, waist, feet, and ankles; a dedicated Arduino board mounted in the RAPIRO operates these motors, which enables it to act. In addition, LEDs whose colors can be adjusted through RGB values are installed in its body; because their light is emitted from its eyes, the color changes are visible from outside the robot. In this experiment, we did not use the LEDs, because we thought that color might affect the experimental results on greeting actions due to personal color preferences.
Sensors: To sense the approach of a subject, two ultrasonic distance sensor modules (range 3 cm–4 m) were used. CdS cells (5 mm type) were used as optical sensors and attached to the RAPIRO body. A hedged sketch of the approach detection follows.
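The paper does not name the ultrasonic module, so the following Arduino-style sketch is only an assumption-laden illustration: it presumes an HC-SR04-style trigger/echo interface and arbitrary pin numbers, and shows how an approach within the 200 cm detection zone of Sect. 3.4 might be sensed.

```cpp
const int TRIG_PIN = 7;   // assumed wiring
const int ECHO_PIN = 8;   // assumed wiring

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

// Returns the measured distance in centimeters (0 on timeout).
long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);   // a 10 us pulse starts one measurement
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000UL);
  return (long)(echoUs / 58);     // sound round-trip time -> centimeters
}

void loop() {
  long d = readDistanceCm();
  if (d > 0 && d <= 200) {        // within the approach-detection zone
    Serial.println("approach detected");
  }
  delay(100);
}
```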
Operation settings: The following actions were set on the RAPIRO (Fig. 2).
To return to the waiting state, the robot slightly spreads both arms and hands, then repeats the cycle. In the looking direction control action, RAPIRO rotates its head 50 degrees toward the direction from which a subject is approaching, either left or right, and returns its head to the front after two seconds. In the greeting action, when a subject stands in front of RAPIRO, it raises its right arm, waves it from left to right for two seconds, and then returns to the waiting state (a hedged sketch of these two routines appears after the following list). The following are the three types of actions in the contact state:
- It moves its left and right hands up and down alternately.
- It shakes its hands and feet.
- It raises its arms and continues to move them from right to left.
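As promised above, this is a hedged Arduino-style sketch of the looking direction control and greeting routines, assuming the head and right-arm servos are driven through the standard Servo library. The pin numbers, neutral angles, and wave amplitude are assumptions, since the actual RAPIRO firmware details are not given in this paper; only the 50-degree head rotation and the two-second durations come from the text.

```cpp
#include <Servo.h>

Servo headYaw;    // head rotation servo (pin assumed)
Servo rightArm;   // right arm servo (pin assumed)

const int HEAD_NEUTRAL = 90;   // assumed angle facing straight ahead
const int ARM_DOWN     = 0;    // assumed lowered-arm angle
const int ARM_UP       = 120;  // assumed raised-arm angle

void setup() {
  headYaw.attach(9);           // pin numbers are assumptions
  rightArm.attach(10);
  headYaw.write(HEAD_NEUTRAL);
  rightArm.write(ARM_DOWN);
}

// Looking direction control: rotate the head 50 degrees toward the
// approaching subject, hold for two seconds, then face forward again.
void lookToward(bool subjectOnLeft) {
  headYaw.write(HEAD_NEUTRAL + (subjectOnLeft ? 50 : -50));
  delay(2000);
  headYaw.write(HEAD_NEUTRAL);
}

// Greeting: raise the right arm and oscillate it for two seconds to
// approximate waving from left to right, then return to the waiting pose.
void greet() {
  rightArm.write(ARM_UP);
  for (int elapsed = 0; elapsed < 2000; elapsed += 500) {
    rightArm.write(ARM_UP - 20);
    delay(250);
    rightArm.write(ARM_UP + 20);
    delay(250);
  }
  rightArm.write(ARM_DOWN);
}

void loop() {
  // The routines above are invoked from the state machine of Sect. 3.2
  // when active behavior is triggered.
}
```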
These contact interactions were repeated for all subjects in the same manner, without added randomness. Because this experiment aimed to compare user impressions with and without greeting actions, we decided that there should be no differences in the contact interactions.
4 Experimental Evaluation
We investigated whether there were improvements in user impressions when a robot takes the initiative and gives a greeting prior to any interaction between the robot and the user. The details of the experiment are as follows.
4.1 Experiment
The subjects were 10 students at our university: six male and four female. In this experiment, we divided the subjects into two teams of five. The subjects on each team experienced both situations, one in which the robot performed greeting actions and one without them. This let us examine differences in impressions depending on the order of the robot’s actions in the experiment.
- Team A: with greetings → without greetings
- Team B: without greetings → with greetings
The evaluation method was as follows. Each subject experienced both situations, with and without greetings, and answered a questionnaire immediately afterwards, selecting one of four levels for each question: “I really thought so” (3), “I kind of thought so” (2), “I didn’t really think so” (1), and “I didn’t think so at all” (0). Questions 1 to 5 investigated “activeness”, “pleasantness”, “affinity”, “intentionality”, and “continuity”, respectively (a minimal scoring sketch follows the question list).
- Q1. Did you think the robot was active? (Activeness)
- Q2. Did you think that the robot tried to entertain you? (Pleasantness)
- Q3. Did you feel pleasant with the robot? (Affinity)
- Q4. Did you feel that the robot had some sort of intention? (Intentionality)
- Q5. Did you think that you would want to continue to play with the robot? (Continuity)
- Q6. What do you think the robot’s action of waving its arms at you when you were approaching it meant?
- Q7. Additional comments
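As a minimal sketch of how such answers can be aggregated, the following maps the four response levels to scores 0–3 and computes a per-question mean for one condition. The answer values shown are placeholders for illustration, not the actual experimental data.

```cpp
#include <array>
#include <iostream>

constexpr int SUBJECTS = 10;

// Mean score for one question over all subjects under one condition.
double meanScore(const std::array<int, SUBJECTS>& answers) {
    double sum = 0;
    for (int a : answers) sum += a;   // each a is in {0, 1, 2, 3}
    return sum / SUBJECTS;
}

int main() {
    // Placeholder data only; the real responses appear in Figs. 4 and 5.
    std::array<int, SUBJECTS> q1WithGreeting    = {3, 2, 3, 3, 2, 3, 2, 3, 3, 2};
    std::array<int, SUBJECTS> q1WithoutGreeting = {1, 1, 2, 0, 1, 2, 1, 1, 0, 1};
    std::cout << "Q1 (activeness) with greetings:    "
              << meanScore(q1WithGreeting) << "\n"
              << "Q1 (activeness) without greetings: "
              << meanScore(q1WithoutGreeting) << "\n";
}
```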
This time we told the subjects that the robot would “initiate active behavior”, without specifically mentioning “greeting actions”. Question 6 was therefore intended to gauge how well the subjects understood the intent of the robot’s active behavior.
The location was a corridor 2 m wide in front of our laboratory. The head motor on RAPIRO cannot move up and down, so we put the robot on a table and adjusted its height so that its looking direction met the subject’s. The ultrasonic sensors detected the approach and departure of the subjects. We placed RAPIRO against the wall and installed the ultrasonic sensors about 160 cm to its right and left, respectively. The initial position of each subject was outside the ultrasonic sensors on either side, about 2.5 m from the robot; the subject could start from either the left or the right side.
The initial state of the robot was observation of a person in a non-contact state. When it detected the approach of a subject, it started performing active behavior if the active behavior setting was enabled; otherwise, it changed modes and started detecting contact. When it detected the subject leaving past the ultrasonic sensor on the opposite side, it left the contact-detection mode and returned to its original state of observing people in a non-contact state. The optical sensors did not react merely to being touched; they could only be set off when a subject approached the robot in a certain way.
We told the subjects to walk at a certain speed and stop once in front of RAPIRO, and had them practice this. The timing was achieved by applying a delay() to the series of actions, from RAPIRO’s looking direction control to its face-to-face greeting, in the program. We did not include a program by which RAPIRO could judge whether a subject was standing in front of it; instead, as long as the subject walked at the practiced speed, we could adjust the timing from the looking direction control action to the greeting, as sketched below. The flow of the experiment is shown in Fig. 3.
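The following hedged sketch illustrates this open-loop timing, reusing the lookToward() and greet() routines from the Sect. 3.5 sketch. The walking-time constant is an assumption, since the paper gives no concrete value; only the use of delay() to bridge the two actions comes from the text.

```cpp
const unsigned long WALK_TIME_MS = 3000;  // assumed sensor-to-robot walking time

void lookToward(bool subjectOnLeft);      // as sketched in Sect. 3.5
void greet();                             // as sketched in Sect. 3.5

void onApproachDetected(bool subjectOnLeft) {
  lookToward(subjectOnLeft);  // head turns 50 degrees, holds, faces forward
  delay(WALK_TIME_MS);        // open-loop wait: the subject walks at the
                              // practiced speed into the greeting zone
  greet();                    // wave for two seconds, return to waiting
}
```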
First, the subject stood outside the ultrasonic sensors (either to the left or right) and started walking straight toward RAPIRO (1). The ultrasonic sensors then reacted, and RAPIRO started a looking direction control action (2). When the subject stopped in front of RAPIRO, it started its greeting action (3). When the subject touched RAPIRO’s optical sensor, it started contact interactions (4). Finally, the subject left RAPIRO (5); at that point, we instructed the subject to pass by the ultrasonic sensor on the opposite side. When no active behavior was to be performed, RAPIRO skipped steps (2) and (3).
Each subject performed this entire round trip (going and coming back) three times. We programmed RAPIRO to ignore information from the ultrasonic sensor when a subject turned around, and not to perform a looking direction control action at that time. We did not fix the duration of the contact interactions.
4.2 Result
Figure 4 summarizes, in a graph, the impression evaluations with and without active behavior for Q1 to Q5 as described in the previous section; statistically significant differences are indicated for each question. Figure 5 shows each subject’s questionnaire answers. Subjects No. 1 to No. 5 belonged to team A, and subjects No. 6 to No. 10 belonged to team B.
Result 1: Statistically significant differences were observed in all of the evaluated fields: “activeness”, “pleasantness”, “affinity”, “intentionality”, and “continuity” (**p < 0.01).
Result 2: Even though the order of the conditions differed between the teams, subjects from both teams showed differences in their evaluations between the presence and absence of active behavior.
Result 3: “Intentionality” changed with the presence or absence of active behavior in the answers of 70% of the 10 subjects.
Result 4: Some subjects gave exactly the same impression ratings regardless of the presence or absence of active behavior.
We investigated whether the evaluations differed depending on the order of the conditions and the presence or absence of active behavior by performing a two-way analysis of variance (ANOVA) on the average evaluation scores of each indicator for each team. The results showed no interaction effects for any of the indicators.
4.3 Discussion
This section summarizes our discussion. Discussions 1 to 4 correspond to Results 1 to 4 described in the previous section.
Discussion 1: In the impression evaluations, statistically significant differences depending on the presence or absence of the robot’s active behavior were observed in all of the following fields: “activeness”, “pleasantness”, “affinity”, “intentionality”, and “continuity”. Therefore, implementing active behavior in a robot can be a factor that improves impressions.
Discussion 2: We focused on the differences in the evaluations of each subject within a team. For subjects No. 3 and No. 4 from team A and subject No. 10 from team B, the differences in evaluations between the presence and absence of active behavior were greater than for the other subjects on their teams. The reason might be as follows: subjects No. 1, No. 2, No. 6, No. 7, No. 8, and No. 9 understood how RAPIRO works and functions, and already knew its method and range of operation, whereas for subjects No. 3, No. 5, and No. 10, this experiment was only their first or second contact with RAPIRO; there may therefore have been differences in prior knowledge. We believe that subjects No. 3 and No. 4 had stronger impressions of RAPIRO’s active behavior than the other subjects, so the differences between the presence and absence of active behavior were greater. Furthermore, subject No. 5 gave high evaluations both with and without active behavior, even though it was also his first time meeting the robot; subject No. 5 loves robot animation and robots themselves, which might have affected his evaluation.
Discussion 3: The active behavior we designed had a significant impact on the robot’s perceived intentionality. We believe that the robot’s performance of active behavior itself helped give it intentionality. Furthermore, even without active behavior, intentionality was not zero, owing to the robot’s reactions in the contact interactions. If there were neither active behavior nor contact interaction, the robot would perform no action other than waiting, so intentionality could be expected to be almost zero. However, since the robot reacted to contact, some intentionality points were awarded even in the absence of active behavior.
Discussion 4: All of subject No. 6’s impression evaluations were exactly the same regardless of the presence or absence of active behavior, and some of subject No. 7’s answers were also the same. Their answers to question 6, “What do you think the robot’s action of waving its arms at you when you were approaching it meant?”, were as follows: “the robot was facing me and making some kind of sign” (No. 6), and no answer (No. 7). These results suggest that they understood that the robot’s active behavior meant something, but did not interpret it as a greeting. Discussions 3 and 4 show that it is important for users to understand exactly what the robot’s actions mean, rather than the robot simply performing active behavior that subjects do not understand. Enhancing a robot’s intentionality might thus lead to improvements in overall impression evaluations.
5 Conclusion
This study focused on the active behavior of a robot and demonstrated a design model for that active behavior. We implemented actions that users would expect a robot to make, in order to improve subjects’ evaluations of the robot. In the experiment, we implemented greetings as a level 1 active action and compared impression evaluations depending on the presence or absence of those greeting actions. The results revealed statistically significant differences in the following fields: “activeness”, “pleasantness”, “affinity”, “intentionality”, and “continuity”.
We should further investigate factors and actions that improve impressions. In addition, in order to verify the behavioral model proposed here, we would like to implement actions besides greetings and build more general robot behavioral models for human-robot interaction.
References
1. Nakata, T., Sato, T., Mori, T., Mizoguchi, H.: Generating familiar behavior in a robot. J. Robot. Soc. Jpn. 15(7), 1068–1074 (1997)
2. Kakio, M., Miyashita, T., Mitsunaga, N., Ishiguro, H., Hagita, N.: How do the balancing motions of a humanoid robot affect a human’s motion and impression? J. Robot. Soc. Jpn. 26(6), 485–492 (2008)
3. Kanda, T., Ishiguro, H., Ishida, T.: Psychological evaluations of interactions between people and robots. J. Robot. Soc. Jpn. 19(3), 362–371 (2001)
4. RAPIRO official site (2015). http://www.rapiro.com/ja/
5. Honda-Robotics (2015). http://www.honda.co.jp/robotics/
6. Pepper (2015). http://www.softbank.jp/robot/special/pepper/
7. Palrogarden (2015). http://www.palrogarden.net/palro/main/framepage.html
8. Nakai, Y., Okazaki, R., Hachisu, T., Sato, M., Kajimoto, H.: Creating life-like effects in stuffed toys using micro movements of reflected light in the toy’s eyes. In: Interaction 2015, Information Processing Society of Japan (2015)
9. Yano, Y., Ikeda, Y., Okada, A., Nakano, M., Sugaya, M.: Transmitting emotion through movement in a robot’s hands. In: Multimedia, Distributed, Cooperative and Mobile (DICOMO) Symposium, July 2015
10. Kiya, R.: An experiment evaluating robot actions targeting human symbiosis. Master’s research report (2012)
11. Hall, E.T.: The Hidden Dimension. Doubleday (1966). (Japanese translation by Toshitaka Hidaka and Nobuyuki Sato, 1970)
12. Nishida, K.: Construction planning, architecture and practical business based on the distance between people when they communicate, human psychology and ecology (1). Archit. Pract. Bus. 5, 95–99 (1985)
Acknowledgement
We would like to thank the Tateishi Science Foundation and MEXT/JSPS KAKENHI Grant 15K00105 for the support that made it possible to complete this study.