As robots are deployed to work in our environments, we must build appropriate expectations of their behavior so that we can trust them to perform their jobs autonomously while we attend to other tasks. Many types of explanations for robot behavior have been proposed, but their impact on aligning expectations of robot navigation paths has not been fully analyzed. In this work, we evaluate several types of robot navigation explanations to understand their impact on humans’ ability to anticipate a robot’s paths. We performed an experiment in which we gave participants an explanation of a robot path and then measured (i) their ability to predict that path, (ii) their allocation of attention to the robot navigating the path versus their own dot-tracking task, and (iii) their subjective ratings of the robot’s predictability and trustworthiness. Our results show that explanations significantly affect people’s ability to predict robot paths and that explanations that are concise and do not require readers to perform mental transformations are most effective at reducing attention to the robot.
1 Introduction
As we introduce robots that perform tasks in human environments (e.g., giving tours at museums [7, 44], delivering food and medicine in hospitals [36], navigating shopping malls and office buildings [5, 27, 42], and driving [28, 29]), it is likely that many people will question the new technology. People may test its abilities (e.g., References [5, 20]) and monitor the robot’s activities in the environment rather than executing their own tasks (e.g., References [7, 27, 42, 44]) to build an understanding of the robot’s abilities and behaviors. The goal of much work in human-robot interaction (HRI) is to develop robot technologies that allow the people who live and work around the robots to avoid disruption and maintain their own productivity, trusting that their robots are acting appropriately.
Prior research has focused not only on how people can form accurate expectations of a robot’s behaviors and capabilities but also on how the robot can influence expectations and repair discrepancies (e.g., using dialogue) (for a review, see Reference [39]). For robot path planning in particular, the explicability metric has recently been proposed to quantify the discrepancy between a robot’s actual plan and an observer’s expectation of that plan [9]. The more explicable a robot’s path, the more closely it aligns with an observer’s prediction of the robot’s path. When a path cannot be explicable, Chakraborti et al. [11] propose explanation as a technique for mental model reconciliation. In other words, communicating information about the path can potentially help observers align their expectations with reality.
Providing explanations is increasingly popular in a variety of intelligent applications (e.g., References [2, 17, 32, 40]). Explanations of machine learning predictions have been shown to improve users’ predictions of behavior and increase reliance on the predictions [8]. The growth of research in explainable AI and robotics has extended the use of explanations to robot decision-making processes [45]. Feedback, including explanations [47] as well as emojis and color indicators [17], has been shown to help people accurately gauge AI and robot capabilities and increase trust [18, 48]. Recently, Chakraborti et al. [11] proposed several different explanation types, including model patch and minimally complete, that were shown to be helpful in mental model reconciliation in different navigation tasks.
With so many different potential types of explanations, there is an open question about whether some types of explanations align observer expectations better than others. We conducted an experiment to assess the roles that different explanations play in helping people predict a robot’s route for navigation. For each of six paths, we showed participants one of several different explanations that were informed by prior work and provided varying types of information and levels of detail that could help observers build expectations for the robot’s routes. First, we asked participants to re-create the path by clicking points on a map in the order that they expected the robot to traverse them. We measured the similarity between the participants’ paths and the robot’s true path as a proxy for the alignment of their expectations with reality. We would expect participants with more accurate predictions to have higher path similarity.
Second, we conducted a dual-task experiment to assess how participants allocated their attention between a robot’s navigation behavior and their own task, given the explanation type. Participants in our study had to note whether the robot was entering crosswalks as it traversed a map while simultaneously performing a dot-tracking task to the best of their ability. We used dot-tracking performance as well as the speed and accuracy of identifying when the robot entered crosswalks to determine attention allocation between the two tasks. Based on the link between surprise (expectation discrepancy) and attention in psychology [25], we hypothesized that participants would spend less time attending to the robot task if they had more accurate predictions of robot behavior. Finally, participants provided subjective ratings of predictability and trust in the robot. This procedure was repeated for all six paths, presented in a random order.
Our experiment identified differences both in path predictability and in task prioritization based on explanation type. Compared to a baseline condition without any route information, participants who read explanations with route information predicted the path significantly more accurately and also rated the robot’s predictability higher. Differences between explanation conditions were most apparent in the dual-task experiment. As the robot navigated, participants were able to spend more time tracking the dot when they had received just enough information to infer the path. Interestingly, extremely detailed explanations sometimes decreased performance even beyond the control condition that contained no path information. Based on these results, we argue that not all explanations are equal and that they should be designed to improve user expectations (and users’ ability to continue their own tasks) by distilling relevant information that helps them anticipate the robot’s movement.
2 Related Work
One challenge of deploying robots is that users do not know what to expect from the robots’ behavior. Users test robot abilities (e.g., References [5, 20]) and monitor a robot’s behavior (e.g., References [7, 27, 42, 44]) to understand and explore its decision making and actions. One reason for this behavior is that observers find the robot’s behavior surprising in some way. Many studies have shown that people dwell on monitoring an event if they find it surprising (surprise being defined as expectation discrepancy), and they may even have trouble performing alternative tasks (i.e., their own work) while observing the surprising events [25]. Even though the surprises tested in that literature are instantaneous (showing an unexpected stimulus on a screen), the dwelling and action interference can last for several seconds. Explanations have been proposed in machine learning [8] and in robotics [10] as a way to align observer expectations with the behavior of the intelligent system, which can reduce surprise. We compare several different explanations to understand their impact on observer expectations and task performance.
2.1 Explanations
It has long been suggested that context-aware systems should be able to explain to users “what they know, how they know it, and what they are doing about it” [4]. Research into context-aware predictive systems has found that people use explanations to clarify their uncertainties about what the system is doing or why [31, 32]. Many different algorithms have been proposed to provide that information to system users [2], and research has shown that people rely more on decisions of machine learning systems that are explained compared to ones that are not explained [8]. Recently, Amir and Amir [3] proposed a method of automatically summarizing previous behavior by agents that was found to help people more appropriately assess the agents’ skills.
More recently, it has been suggested that explanations could be one way to help people understand what to expect from autonomous robots as they act in human environments [45]. Many researchers have proposed different types of explanations for different types of robot behavior. For example, Hayes and Shah [23] proposed why, why not, and how explanations of robot policies. Chakraborti et al. [11] proposed several types of explanations that provide the minimal information needed to “patch” an observer’s mental model or that are executable. Other work has proposed explaining the cost functions of Markov Decision Processes to help people understand the trade-offs that their robots are making [43]. Specifically for robot navigation, verbalization has been proposed to summarize robot paths at different levels of detail so that different types of users can understand where the robot has been [41]. More recent developments suggest that there are ways to reduce the size of an explanation by finding important features in an abstract policy [46]. Our goal is to understand how different explanations of robot route navigation behavior impact user expectations of that behavior.
2.2 Route Instructions as Explanations
Rather than reinvent suitable explanations of robot route navigation, we draw significantly from the literature on human-generated route instructions to develop our explanations of robot navigational routes. Much work has investigated route instructions and their ability (or inability) to help readers accurately navigate to desired goal locations. This research often refers to landmarks, used in the colloquial sense of “a prominent or conspicuous object on land that serves as a guide... a distinguishing landscape feature marking a site or location” [1]. Researchers have found that successful route instructions (those that readers most often understand correctly) include “minimal and essential landmark information” and a set of instructions for what to do at those landmarks [13, 15, 16]. What constitutes minimal, essential information is a reflection of the map or context. The specific selection of such key information can be critical to explanation quality, as is an explanation’s ability to contrast a described route with the alternatives [33]. In contrast, GPS navigation technology typically uses road names, distances, and turns; maintaining valid databases of landmarks and updating them when changes occur is expensive, and road names change less frequently than landmarks do [6]. Comparisons between instructions with and without landmarks have shown that people are significantly better at following instructions with landmarks than without them [14].
The question of what comprises minimal information seems to depend on prior knowledge of the environment. When readers are newer to the environment, they respond best to distance and landmark instructions as they develop an egocentric (first-person view) cognitive map of the environment [34]. However, more knowledgeable readers, who have developed a richer, allocentric (global bird’s-eye view) map with distances and routes between more landmarks, tend to think about instructions hierarchically as routes between well-known landmarks [34]. With similar reasoning, some people choose to provide turns in an allocentric way using cardinal directions (north, south, east, and west) while others give directions in an egocentric way as if the instructions were being executed by the reader (left, right) [12]. Interestingly, while “minimal” instructions of all necessary turns are sufficient, people tend to provide additional information at times when the path is “tricky” in some way or to help validate that the user is or is not traveling in the correct direction (e.g., “If you see the school, you’ve gone too far”) [24].
With some widely-understood patterns (e.g., use of landmarks) and some diversity in instructions (e.g., egocentric vs allocentric instructions, specificity of instructions), we developed several different explanation types to understand their impact on readers’ expectations of the routes they are reading about. Details about the particular route navigation explanations that we study are provided in Section 3.2.
2.3 Measuring Expectations of Robot Behavior
Understanding user expectations of robot behavior is a challenging task. Research often implicitly assumes that observers start with no knowledge of the robot behavior and that explanations should therefore completely describe the robot’s behavior [23, 41, 43]. In contrast, Chakraborti et al. [10] gave people a starting mental model (description of the problem) and then modified that model to test what people chose to explain. In this work, we provided people with varying, potentially incomplete descriptions of robot behavior and then measured their prediction of and surprise at the true executed behavior. This allowed us to test the impact of the explanations on the participants’ expectations directly rather than assuming what the reader interpreted from the explanation.
In particular, we gave participants two tasks that interfered visually [37] and measured on which task participants chose to focus. Work on surprise and attention has shown that people will take their focus off of one task to dwell on a surprising event [6]. Surprising events are those that deviate from an observer’s expectations; in our case, this would be a robot taking a path that is notably different than the one the participant expected. This idea is similar to measuring neglect tolerance—a human monitor’s ability to let a robot act on its own autonomously without watching its behavior—except that the participants in our study were required to monitor the behavior occasionally and the robot acted autonomously whether or not they monitored it [21]. By measuring the decrease in performance on one task in our study, we can understand how surprising the path is given the explanation.
3 Experimental Domain
To study the effects of explanation types on expectations of mobile robot behavior, we designed a navigation task for a Cozmo robot that participants could view in person or remotely. We used a city map rug to provide structure to the robot’s motions and labeled the locations on the map to indicate landmarks. Using the map and landmarks, we developed five explanations that described routes from place to place, four of which could be automatically generated from map data.
3.1 Robot Navigation
We used an Anki Cozmo robot, which is a small, expressive robot that moves across flat surfaces on two treads. It has a pixelated display to convey eye movements and expressions. Its software development kit (SDK) allows for programming a large range of behaviors, including navigation.
To give Cozmo a navigation context that participants would intuitively understand, we used a city map carpet (Figure 1). The carpet measured 4 ft by 6 ft and contained unlabeled streets and 18 labeled landmarks. Prior work indicates that people remember instructions better when they are grounded in landmarks, so this setup is sufficient to support our goals [13, 15, 16]. The streets contained crosswalks (white stripes). We created six paths in various directions for Cozmo to traverse, each crossing five or six crosswalks (Figure 1). The paths varied in their number of turns and directness. The paths contained crosswalks because identifying crosswalks, and obstacles moving in them, is of key importance when a robot navigates real streets. Participants were instructed to monitor the robot as it traveled and indicate when it crossed a crosswalk, simulating the training of a crosswalk or obstacle detector.
Fig. 1.
To make navigable paths, we created a graph of streets, intersections, and landmarks and computed straight and turn actions. We used Cozmo’s SDK to pre-program the paths and create action lists computed from the graphs. The paths were open-loop (as the Cozmo was not actively sensing its location), but the robot was able to execute the paths while staying in the roadways if it moved slowly. Though the robot could be viewed in person for future experiments, we video-recorded its paths from above for this online experiment and displayed them at \(2.5\times\) speed to counteract the robot’s slowness.
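For illustration, an open-loop action list of this kind can be executed with the Cozmo Python SDK roughly as follows. This is a minimal sketch under our own simplifications: the action list, distances, and speed shown are hypothetical placeholders rather than the values used in the study.

import cozmo
from cozmo.util import degrees, distance_mm, speed_mmps

# One pre-computed path as an open-loop list of straight and turn actions
# derived from the map graph (placeholder values, not the study's parameters).
ACTIONS = [("straight", 300), ("turn", -90), ("straight", 450),
           ("turn", 90), ("straight", 200)]

def run_path(robot: cozmo.robot.Robot):
    # Drive slowly so the open-loop path stays within the printed roadways.
    for kind, value in ACTIONS:
        if kind == "straight":
            robot.drive_straight(distance_mm(value), speed_mmps(40)).wait_for_completed()
        else:  # "turn": positive degrees turn left, negative turn right
            robot.turn_in_place(degrees(value)).wait_for_completed()

cozmo.run_program(run_path)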
3.2 Explanations
Many algorithms have been proposed for explaining or describing robot navigation behavior. For this context, we had to consider that our algorithm only has access to two inputs: a map with labeled landmarks to reference (but no street names) and a path (i.e., a sequence of actions from the start to the goal). Given the information available, we chose to focus our explanations on widely trusted patterns of human-generated navigation instructions. We created four different route explanation types that we could implement automatically and one human-generated explanation. In total, there were six explanation conditions:
• Baseline: beginning and end points only;
• Human-Generated: beginning and end points with natural (human-generated) descriptions of the route;
• Egocentric Turns: beginning and end points with relative directions (left, right) to three places that the robot passes;
• Allocentric Turns: beginning and end points with cardinal directions (north, south, etc.) given to three places that it passes;
• Passing By: beginning and end points with a list of places that it passes; and
• Avoids: beginning and end points with places it avoids.
Egocentric Turns.
When describing a path, the robot can report its turns from its perspective using the first-person egocentric references left and right. A reader can then take the perspective of the robot and visualize where the robot will turn as it navigates the map. This is very similar to how a human would generate instructions for another person to follow [15].
To generate turn-by-turn directions, our explanation algorithm took as input the map as a graph that included landmarks and street intersections and the path as a sequence of intersections in the map (Algorithm 1, explanation Egocentric Turns). Landmarks included every building and park on the map, which could be identified by both a robot and a person. Given this information, the algorithm first extracted the start and end landmarks from the path and left the middle as a list of intersections to describe. In our map, the robot made turns around cars printed on the rug; thus, we extracted the key_turns that were most indicative of the overall direction of the robot (e.g., turns that were not immediately preceded or succeeded by another turn). The algorithm used get_landmarks to determine the landmarks at those intersections. If the entire path were to be described instead, then the algorithm would need to get_landmarks from the middle of the path rather than from the key intersections. If the number of landmarks still needed to be reduced, then the algorithm sampled N points (intersections). Finally, the turns were extracted from the sampled intersections by computing the directions from one to the next, and the description of turns and landmarks was concatenated together sequentially from the start, through the middle, to the end landmark in path order to generate the explanation.
An example of our algorithm’s output for path 3 was “The robot will start at the Park, then turn right at the Snacks, then turn right at the Restaurant, then turn left at the Movie Theater and stop at the Mechanic.”
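To make the procedure concrete, the core of the turn-and-landmark concatenation can be sketched as follows. This is our own simplified reconstruction in Python, not the study’s Algorithm 1: it assumes the start, key turn intersections, and end (with their landmark names) have already been extracted and sampled, and it determines left versus right with a 2D cross product in image coordinates.

from typing import List, Tuple

Point = Tuple[float, float]  # map/image coordinates, with y increasing downward

def turn_word(a: Point, b: Point, c: Point) -> str:
    # Egocentric turn taken at b when traveling a -> b -> c; in an image frame
    # (y down), a positive cross product corresponds to a right turn.
    cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
    return "right" if cross > 0 else "left"

def egocentric_explanation(points: List[Point], names: List[str]) -> str:
    # points/names: start, key turn intersections (at most three), and end, in path order.
    steps = [f"start at the {names[0]}"]
    for i in range(1, len(points) - 1):
        steps.append(f"then turn {turn_word(points[i - 1], points[i], points[i + 1])} "
                     f"at the {names[i]}")
    return "The robot will " + ", ".join(steps) + f" and stop at the {names[-1]}."

# Illustrative coordinates (not the study map's) chosen to reproduce the path 3 sentence:
print(egocentric_explanation(
    [(0, 0), (3, 0), (3, 2), (1, 2), (1, 4)],
    ["Park", "Snacks", "Restaurant", "Movie Theater", "Mechanic"]))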
Allocentric Turns.
Prior work has shown that some people prefer to write and to read directions with allocentric turns (using the cardinal directions north, south, east, and west) [12]. As a result, we included a map-perspective condition in which the same explanations were generated as for the egocentric turns, but instead of extracting left/right turns, we extracted the map-perspective allocentric cardinal direction representing the straight-line direction toward the next key intersection. In this case, the Algorithm 1 explanation Allocentric Turns output “The robot will start at the Park, then head northeast past the Snacks, then head south towards the Restaurant, then head southwest towards the Movie Theater and stop at the Mechanic.”
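The only change from the egocentric sketch above is the direction computation: rather than a left/right cross product, the straight-line heading toward the next key intersection is quantized to a compass word. A hedged sketch, assuming north is “up” in the map image:

import math

def cardinal(a, b):
    # Compass direction of straight-line travel from a to b in image coordinates
    # (y increases downward, north is up), quantized to eight compass words.
    angle = math.degrees(math.atan2(a[1] - b[1], b[0] - a[0]))  # 0 = east, 90 = north
    words = ["east", "northeast", "north", "northwest",
             "west", "southwest", "south", "southeast"]
    return words[round(angle / 45.0) % 8]

# e.g., cardinal((0, 0), (3, -3)) returns "northeast"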
Passing By.
The robot’s path could possibly be inferred by mentally connecting the landmarks through the streets rather than by explicitly noting turns. This is equivalent to more “expert” human-generated instructions, in which the writer assumes that the reader has allocentric spatial knowledge of how to reach one landmark from another [34]; this assumption holds in our case because participants were given the map to view. The reduction in explanation size may make the explanation more memorable than longer explanations if participants can visualize the path that connects the landmarks.
This algorithm concatenated the start and end locations and then listed the landmarks that Cozmo passed in order. While this pattern does not follow the previous ones structurally, it sounded more natural in English. To continue with our previous example, the algorithm would output “The robot will start by the Park and go to the Mechanic, passing the Snacks, Restaurant, and Movie Theater on its way there.”
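The corresponding sentence construction is straightforward; a small sketch of our own, in which the natural-English list formatting is the only real logic:

def passing_by_explanation(start, end, passed):
    # passed: the (up to three) landmarks along the route, in path order.
    listed = (", ".join(passed[:-1]) + ", and " + passed[-1]) if len(passed) > 1 else passed[0]
    return (f"The robot will start by the {start} and go to the {end}, "
            f"passing the {listed} on its way there.")

# passing_by_explanation("Park", "Mechanic", ["Snacks", "Restaurant", "Movie Theater"])
# reproduces the example sentence above.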
Avoids.
Finally, sometimes it is more informative to know where the robot will not go rather than where it will go (e.g., Reference [30]). This is similar to the idea in the route instructions literature in which people provide context about what indicates that a person has missed a turn [24], though we take it to an extreme because our paths are very short and the map is small. In this condition, the algorithm listed landmarks that the robot never passed on its route (on any road adjacent to the landmark). By constraining the streets that the robot would not use, it was possible to determine an approximate path. This condition required additional mental effort to eliminate the avoided streets, but it was the same length as the Passing By explanation and might have given participants a more intuitive idea of the robot’s path.
In this condition, the algorithm extracted all landmarks surrounding the middle of the path using get_landmarks and removed those landmarks from the set of all landmarks in the map to find the ones that were avoided. Those landmarks were sampled if necessary, ideally ensuring that the sampled landmarks were dispersed around the path to increase the number of useful street constraints. The algorithm finally concatenated the list of avoided landmarks with the start and end landmarks as in the Passing By case: “The robot will start by the Park and go to the Mechanic, taking a route that avoids the Pet Store, Fire Station, and Bus Lane.”
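A sketch of the avoided-landmark selection follows, again a simplification of our own: excluding the start and end landmarks from the avoided set is our assumption, and the dispersion-aware sampling described above is reduced here to a simple truncation.

def avoids_explanation(start, end, passed_landmarks, all_landmarks, n=3):
    # Landmarks adjacent to the route's middle are removed from the full map set;
    # what remains is the "avoided" set, truncated to n for the explanation.
    avoided = sorted(set(all_landmarks) - set(passed_landmarks) - {start, end})[:n]
    listed = ", ".join(avoided[:-1]) + ", and " + avoided[-1]
    return (f"The robot will start by the {start} and go to the {end}, "
            f"taking a route that avoids the {listed}.")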
Study Implementation. We constrained the number of landmarks that each algorithm could mention to three, based on pilot tests indicating that three landmarks (plus the start and end locations) were enough to describe the paths on our map without making the explanations too long. On larger maps or for paths with more turns, we would expect more landmarks to be described; future work should explore these conditions. We note again that our robot sometimes moved out of the way of a “car” on the map but continued generally in the same direction, so while our generated explanations do not take these small deviations into account, they do generally convey the path to the goal. For the three conditions in which the landmarks were on the path route, we used the same random sample (distributed along the path such that one landmark occurred in each one-third of the distance) to reduce variability between explanations. During a real deployment, such identical sampling across explanation types would be unlikely.
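The “one landmark per third of the path” sampling can be sketched as follows (a hypothetical helper of ours; the candidate list, progress fractions, and fixed seed are our assumptions about how the shared sample was kept identical across conditions):

import random

def sample_one_per_third(candidates, seed=0):
    # candidates: list of (landmark_name, fraction_of_path_traveled) pairs.
    # A fixed seed yields the same sample for every explanation condition.
    rng = random.Random(seed)
    picks = []
    for lo, hi in [(0.0, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 1.0001)]:
        bucket = [name for name, f in candidates if lo <= f < hi]
        if bucket:
            picks.append(rng.choice(bucket))
    return picks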
We tested these four explanations against two alternatives: a Baseline condition listing only the start and end locations (e.g., “The robot will start by the Park and go to the Mechanic”) and a Human-Generated explanation condition. The Human-Generated explanations were written by two members of the research team. We instructed them to also use three landmarks and the terms “start” and “ending at,” but they could otherwise use any language. Unprompted, they used somewhat more complex language, such as “u-turn” and turning “before” a landmark. There was no specific effort to make these instructions similar or dissimilar to the other explanation types. The six Human-Generated explanations are listed in Table 1.
Table 1. Human-generated Explanations of the Six Robot Paths

Path 1: The robot will start at the Bus Lane, turn right before the Fire Station, turn left before the Day Care, and turn right at the Restaurant before ending at the Pet Store.
Path 2: The robot will start on the north side of the Park, turn right at the Snacks, head through the roundabout by the Fountain, and then pass the Pet Store before ending at the Movie Theater.
Path 3: The robot will start at the Park, make a U-turn at the Snacks, and head past the Restaurant before navigating toward the Movie Theater and ending by the Mechanic.
Path 4: The robot will start at the Bus Station, navigate around the Flower Shop, go past the Day Care, and then turn left by the House before ending at the Snacks.
Path 5: The robot will start at the Pet Store, turn right after the Movie Theater, then turn right after the Drive In, and navigate into the roundabout by the Police before ending at the School.
Path 6: The robot will start at the Movie Theater, head through the roundabout and around the Fountain, pass the Day Care, turn right by the Drive In, and end at the Car Wash.
3.3 Hypotheses
We had two hypotheses about the effects of explanation type on participants’ path predictions, their ability to perform the two simultaneous tasks, and subjective measures of trust in the robot. First, we expected that providing any explanation of the robot’s behavior would improve these three measures compared to Baseline (H1). Second, we predicted that shorter and more familiar explanations (i.e., Passing By or Egocentric Turns) would improve the three measures more than longer, more complex explanations containing turns (H2).
4 Study Design
To measure the effect of explanations on user expectations of and attention to (surprise) robot behavior, we performed a dual-task experiment in which we asked participants to simultaneously track a dot moving on the screen and monitor the robot as it navigated on the map. Participants were randomly assigned to one of six explanation conditions. They saw six paths, all using the same explanation condition, presented in a random order. We measured participant performance on both tasks, and they provided subjective trust ratings on a questionnaire given afterwards. We evaluated how receiving different explanations impacted our measures.
4.1 Dot-tracking Task
In the dot-tracking task, participants were asked to use their mouse cursor to track a quickly moving red dot, 50 px wide (Figure 2(b), left), within a 600 \(\times\) 450 px area. The dot was programmed to navigate to goal x and y locations within the bounded area, bouncing off walls when necessary. The dot position updated by one pixel on each axis every 0.2 s. Each time the dot reached its x or y goal, it computed a new goal on that axis and continued moving. As a result, the dot frequently changed directions at random rather than moving predictably, making it harder to track.
Fig. 2.
The dot started moving when the participant pressed the space bar. As feedback during the task, participants received 1 point every 0.2 s if their cursor was anywhere on the dot and lost 1 point if it was not. The speed and unpredictability of the dot meant that the task required substantial concentration to receive positive points. Participants needed a positive score (\(\gt\)0) during all six paths to be included in the dataset.
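The dot’s movement and scoring rule can be summarized in the following sketch, which is our reconstruction from the description above: the goal re-sampling shown keeps the dot inside the area, so the wall-bouncing case is omitted, and the starting position is assumed.

import random

class WanderingDot:
    # Sketch of the dot-tracking stimulus: the dot steps one pixel per axis every
    # 0.2 s tick toward independently sampled x and y goals, picking a new goal on
    # an axis whenever the current one is reached.
    def __init__(self, width=600, height=450, radius=25):
        self.w, self.h, self.r = width, height, radius
        self.x, self.y = width // 2, height // 2          # starting position assumed
        self.gx, self.gy = self._goal(self.w), self._goal(self.h)

    def _goal(self, limit):
        return random.randint(self.r, limit - self.r)

    def tick(self, mouse_x, mouse_y, score):
        # One 0.2 s update: move, re-sample reached goals, and update the score
        # (+1 if the cursor is anywhere on the 50 px-wide dot, -1 otherwise).
        self.x += (self.gx > self.x) - (self.gx < self.x)
        self.y += (self.gy > self.y) - (self.gy < self.y)
        if self.x == self.gx:
            self.gx = self._goal(self.w)
        if self.y == self.gy:
            self.gy = self._goal(self.h)
        on_dot = (mouse_x - self.x) ** 2 + (mouse_y - self.y) ** 2 <= self.r ** 2
        return score + (1 if on_dot else -1)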
4.2 Robot Navigation Task
In the robot navigation task, participants were told to press the “z” button each time the robot entered a crosswalk. They were not told how many crosswalks to expect. Participants viewed a 600 \(\times\) 680 px video of a real Cozmo navigating over the city carpet, recorded from above (Figure 2(b), right). Participants had to look at the robot’s progress occasionally throughout the video to perform this task successfully. The robot was small and white, and the multi-colored map made the robot difficult to find at a brief glance. Thus, participants had to completely shift their visual attention between the dot-tracking task and this task to complete both with reasonable success. They could look at the video less frequently if they could predict the path and anticipate where the crosswalks would occur. The surprise-attention literature suggests that participants would look at the video more frequently and dwell on it longer if they were surprised by the path or unsure of where the robot would travel next.
When the space bar was pressed, the video started playing and the dot started moving on the adjacent panel. Participants’ key presses were logged. If they pressed “z” during a crosswalk, then they received 1,000 points on their crosswalk score. They could only receive the points once per crosswalk. We removed participants who did not press “z” for at least one crosswalk on every path.
4.3 Procedure
After providing informed consent, participants completed a questionnaire asking for their age, gender, occupation, native language, country where they spent the most time as a child, whether they had any robots or pets and if so what kind(s), and the Ten-Item Personality Inventory [22].
Next, participants were given an overview describing both tasks. The overview specifically included the participants’ goal for the experimental trials: “You will earn points for each correct button press and also based on how long your mouse stays on the red dot. Your goal is to earn the best combined score of tracking the dot and training the robot for each robot path. You will be disqualified for dot-tracking scores lower than 0 or less than 500 points for crosswalks on each page.”
Then, participants were taken to a practice website with more detailed instructions about the task procedure. First, they were told that they would see a city map with labeled landmarks and a description of a robot’s path through that map. The instructions said, “Study the explanation and answer the questions on the next page.” They were given an example explanation next to the map and told upon advancing to the next page, “Click 10 points in order along the path that you think the robot will take from the start to the end.” When they were ready to continue, participants were asked to click the start and end locations of the path and then click to place dots on 10 intermediate points showing their predicted path. If they got the start and/or end landmark(s) wrong, then they were instructed to try again and reread the explanation (and then repeat the clicking). We did not check the predicted paths for accuracy during the experiment. Next, participants were told that they should track the red dot with their mouse and practice pressing the “z” button at any time (interface shown on Figure 2(b)). They pressed the space bar to start and could click “Next” to move to the next page once they had pressed “z” at least once. Participants were reminded not to navigate away from the website during the tasks as the tasks would continue and they would be disqualified for missing data.
After completing the practice trial, the participants started the study. The six paths were shown in a random order. For each path, the participant (1) read the explanation, (2) clicked the start and end landmarks and 10 intermediate points along the predicted path, and (3) pressed space to start the two tasks. (The exact instructions said, “Start the experiment. Track the dot using the mouse and press ‘z’ when the robot crosses crosswalks. Press ‘space’ to begin.”) When the video stopped playing, a “Next” button appeared for continuing to the next path. When all six paths were completed, participants were taken to a final survey that included 13 rating questions and a request to estimate the total number of crosswalks that the robot crossed.
4.4 Measures
For each path, our website logged participant ID, path ID, path order, every click and key press on the interface (including the start/end and predicted path points), the location of the dot and mouse, and the time of the video every 0.2 s while the dot/robot was moving. At the end of each path (including the practice), we logged the data to a file on our server. From these log files, we computed the measures that we used to analyze differences in trust across the experimental conditions:
• Mouse-on-dot Proportion: the proportion of time that the mouse was on the dot, computed for different time segments:
– Crosswalks: proportion of time that the mouse was on the dot while the robot was on the crosswalks;
– Turns: proportion of time that the mouse was on the dot while the robot was turning (shown in our pilot data to attract gaze to the robot);
– Control: proportion averaged across all other times;
• Z-press proportion: proportion of crosswalks in a path that were noted with a “z” press within a ±0.5 s expanded window around the robot being on the crosswalk (i.e., allowing minor timing errors);
• Average time-to-z-press: the first time that “z” was pressed during the expanded window for each crosswalk, averaged across all crosswalks that were identified in the path (both z-press measures are sketched in code after this list).
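A sketch of how the two z-press measures can be computed from the logs (our reconstruction; in particular, measuring the latency from the moment the robot enters the crosswalk is our assumption about the reference point):

def z_press_measures(z_times, crosswalk_intervals, pad=0.5):
    # z_times: timestamps (s) of "z" presses on one path.
    # crosswalk_intervals: (enter, exit) timestamps (s) for each crosswalk on the path.
    hits, latencies = 0, []
    for enter, exit_ in crosswalk_intervals:
        presses = [t for t in z_times if enter - pad <= t <= exit_ + pad]
        if presses:
            hits += 1
            latencies.append(min(presses) - enter)   # time to first press in the window
    proportion = hits / len(crosswalk_intervals)
    avg_time_to_press = sum(latencies) / len(latencies) if latencies else None
    return proportion, avg_time_to_press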
Path Integral: Additionally, we computed each user’s expectation of the robot’s path by connecting their 10 clicks in sequence. We found that about 2% (20 of the 180 × 6 = 1,080 clicked paths) were noisy, where participants clicked in the order they remembered or inferred parts of the path instead of in order from the beginning to the end. We corrected these paths by sorting the points from top to bottom (or vice versa) and smoothing the sorted paths to ensure that they never went backwards away from the goal. To do this, we first vertically aligned points that were within 150 px of each other so that they shared a y value and then computed the longest sequence of points with increasing y values. We then computed the integral as the distance from the smoothed path to the robot’s path at each pixel height from the top to the bottom of the map. The corrections improved (reduced) the path integral scores for all participants.
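The integral itself can be sketched as a row-by-row horizontal distance between the two paths. This is our interpretation of the description above; the sorting and smoothing corrections are assumed to have already been applied, and both paths are given as points ordered from the top of the map to the bottom.

import numpy as np

def path_integral(predicted_pts, robot_pts, map_height):
    # predicted_pts, robot_pts: (x, y) points in pixels, ordered by increasing y.
    rows = np.arange(map_height)
    pred_x = np.interp(rows, [p[1] for p in predicted_pts], [p[0] for p in predicted_pts])
    robot_x = np.interp(rows, [p[1] for p in robot_pts], [p[0] for p in robot_pts])
    # Sum of horizontal separations at every pixel height = pixels enclosed between paths.
    return float(np.sum(np.abs(pred_x - robot_x)))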
Subjective Trust: The rating questions at the end of the experiment used 7-point scales from Strongly Disagree to Strongly Agree. Participants rated the following statements: “I am confident in the robot” (confidence); “The robot is dependable” (dependability); “The robot is reliable” (reliability); “The robot’s behavior can be predicted from moment to moment” (predictability); “I can count on the robot to do its job” (count on); “The robot was malfunctioning” (malfunctioning); “I trust this robot” (trust this robot); “I trust robots in general” (trust general); “I will not trust robots as much as I did before” (not trust general); “I could not focus on the tracking task, because the robot needed my attention” (not focus); “I spent more time watching the robot than on the tracking task” (time on robot); “It was hard to complete the tracking task while watching the robot” (hard to track); and “If I did this task again, I would spend more time watching the robot” (future time on robot). Some items assessing trust were adapted from Jian and colleagues [26] (confidence, reliability, predictability, trust this robot) and Muir [35] (dependability, reliability, predictability).
4.5 Participants
We recruited 242 participants from the Prolific.co recruitment website and paid them 5 USD for their time. To participate, site users had to have identified themselves as 18 years of age or older, fluent in English, and having normal or corrected-to-normal vision. Data from 47 participants had to be removed because of technical issues that were fixed partway through data collection. For the remaining individuals’ data to be included, each participant had to press “z” no more than 50 times throughout the experiment, identify at least one crosswalk correctly per map, have a dot score greater than zero for every map, and not navigate away from the page or lose connectivity for more than 10 s or for any period of time that overlapped with a crosswalk on each map. Thirteen participants were eliminated for these issues.
In total, data from 182 participants were included in the analysis. Of these participants, 120 were male, 61 female, and 1 genderfluid; 82 were ages 18–23 years, 42 were 24–29, 43 were 30–39, 10 were 40–49, and 5 were 50–59; and 50 said they spent most of their childhood in North America, 10 in South America, 115 in Europe, and 7 on other continents. There were 30 participants who successfully completed each condition except for Baseline, which had 32. This research was approved by our Institutional Review Board.
5 Results
We used the measures described above to evaluate the effects of the explanations on user expectations and attention. For each statistical analysis, we performed a Restricted Maximum Likelihood (REML) test on 1,092 observations with Explanation, Path, and their interaction as fixed effects and with ParticipantID and nested Path as a random effect.
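As an illustration of this model specification, a comparable analysis could be set up as follows. This is a sketch only: the column names and file are hypothetical, the original analysis may have been run in different software, and statsmodels reports Wald tests for the coefficients rather than the F ratios given below.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-trial table (one row per participant x path) built from the logs.
df = pd.read_csv("trial_measures.csv")

model = smf.mixedlm(
    "PathIntegral ~ C(Explanation) * C(Path)",   # fixed effects and their interaction
    data=df,
    groups=df["ParticipantID"],                  # random intercept per participant
    vc_formula={"Path": "0 + C(Path)"},          # path nested within participant
)
result = model.fit(reml=True)                    # restricted maximum likelihood
print(result.summary())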
User Expectations.
We analyzed differences in Path Integral based on the explanation type. Figure 3(a) shows the mean number of pixels (and standard deviations) enclosed between the two paths, out of the 3.5M total pixels comprising the image. (A lower integral is better, as there are fewer pixels separating the paths.) The average Path Integral over all of the data was 504,034. Good estimates were under 400,000 pixels, and the worst estimates were over 1M pixels.
Fig. 3.
Explanation had a significant effect on the Path Integral (F(5,5) = 51.12, \(p\lt 0.001\)). A post-hoc Tukey’s Honestly Significant Difference (HSD) test with an \(\alpha = 0.05\) showed that the Baseline condition (M = 840,361, s.d. = 148,089) was significantly worse than all of the other explanations (M = 431,000, s.d. = 148,081). Additionally, the Passing By explanation (M = 373,926) was significantly better than the Avoids explanation (M = 494,001). No other differences were found.
Path also significantly affected Path Integral (F(5,5) = 47.50, \(p\lt 0.001\)). A post-hoc Tukey HSD test showed that participants were significantly worse at predicting paths 5 (M = 779,238, s.d. = 116,112) and 6 (M = 625,955) compared to the others (M = 398,750). Those two paths were less direct from the start to the goal than the others, and participants who did not understand or remember the explanation were not able to infer it from the start and the goal.
Finally, the interaction between Explanation and Path was significant (F(25,25) = 15.04, \(p \lt 0.001\)). A Tukey HSD test focused on the effects of Explanation for each Path. For Path 3, the Baseline (M = 635,142) was significantly worse than Passing By (M = 331,324) and Avoids (M = 330,513). For Paths 1, 4, and 5, the Baseline was significantly worse than all other Explanations. We found no significant effects of Explanation on Paths 0 and 2.
Based on these findings, we conclude that an explanation does help align user expectations of robot behavior.
Dual-task Performance.
Next, we analyzed measures of dual-task performance and interference (Crosswalk Mouse-on-Dot proportion, shown in Figure 3(b)). If participants were attending to the robot task, then their performance on this metric should have dropped. Our REML analysis of Mouse-on-Dot proportion during crosswalks found that Explanation type was statistically significant (F(5,5) = 4.83, \(p \lt 0.01\)) and Path was statistically significant (F(5,5) = 2.54, \(p \lt 0.05\)), while the interaction effect was not statistically significant. A Tukey HSD post-hoc analysis of Explanation type showed that Passing By (M = 83.62% on dot, s.d. = 4.26%) was significantly better than the Baseline (80.36%), Allocentric Turns (79.64%), and Human-Generated explanations (78.63%). No other pairs were statistically significant. During turns and control areas, our REML analyses also found that Explanation type was statistically significant (F(5,5) = 4.62 and F(5,5) = 4.98, respectively, \(p \lt 0.01\)), while Path and the interaction effect were not statistically significant. A Tukey HSD post-hoc analysis of turns showed that Passing By (M = 85.63%, s.d. = 3.12%) was significantly better than Avoids (83.06%), Allocentric Turns (82.88%), and Human-Generated explanations (82.25%). Similarly, the Tukey HSD post-hoc analysis of the control times also showed that Passing By (86.17%, s.d. = 3.23%) was significantly better than Avoids (83.25%), Allocentric Turns (83.34%), and Human-Generated explanations (82.72%). Additionally, Egocentric Turns (85.13%) were significantly better than Human-Generated explanations. Overall, we find that Passing By explanations significantly improve dot tracking compared to Baseline during crosswalks and relative to other explanations at other times. Participants who had that explanation spent more time tracking the dot rather than observing the robot’s behavior.
We tested whether more dot tracking resulted in worse robot tracking. We first analyzed the average time to the first z-press per path. Path significantly affected z-press time (F(5,5) = 44.80, \(p \lt 0.001\)), Explanation had a marginally significant effect (F(5,5) = 2.07, \(p = 0.067\)), and the interaction between the two independent variables had no significant effect. In order from shortest to longest, Path 2 z-press times were significantly lower (M = 2.44 s, s.d. = 0.14 s) than those of Paths 0 and 1 (M = 2.55 and 2.59, respectively), which in turn were significantly lower than those of Paths 3 and 4 (M = 2.7 and 2.72, respectively), which in turn were significantly lower than those of Path 5 (M = 2.91). On inspection of the paths (Figure 1), this order reflects the complexity of the paths and their directness from the start to the goal. A test of the z-press proportion did not show any statistically significant effects of our independent variables. Participants were good at identifying crosswalks at the required times: over 95% for each explanation type and each path.
Overall, participants chose to allocate attention to the robot monitoring task at the cost of their performance on dot tracking.
Subjective Trust.
Finally, we evaluated each participant’s final survey to understand how the explanations affected their perceived trust and predictability. We performed a factor analysis on all 13 questions to determine which responses were correlated and how individual items could be grouped into scales. Four factors were identified: Credibility (F1) included confidence, reliability, dependability, predictability, count on, malfunctioning (reverse scored), and trust this robot; Distraction (F2) included not focus and hard to track; General Robot Trust (F3) included trust general and not trust general (reverse-scored); and Future Attention (F4) included future time on robot and time on robot. Of these four factors, only Credibility (F1) showed a significant effect of Explanation condition (Wilcoxon/Kruskal-Wallis test for non-parametric data, \({\chi }^2 = 12.144, p = 0.033\)). Pairwise comparisons were performed using the Steel-Dwass method to correct for multiple comparisons and found that the Baseline condition was significantly worse than Passing By and than Egocentric Turns (all p \(\lt 0.05\)); no other significant differences were found.
Examining the individual questions within F1, Wilcoxon/Kruskal-Wallis tests showed that Explanation condition had a significant main effect on confidence (\({\chi }^2 = 16.069, p = 0.0067\)) and predictability (\({\chi }^2 = 21.556, p = 0.0006\)). For confidence, Steel-Dwass pairwise comparisons showed that Baseline received significantly lower scores than Avoids and than Egocentric Turns. For predictability (Figure 3(c)), Baseline received significantly lower scores than Passing By, Egocentric Turns, and Human-Generated. No other individual questions showed significant main effects of explanation (\(\alpha = 0.05/7\) with Bonferroni corrections).
6 Discussion
In general, explanations helped participants predict the robot’s behavior compared to not having an explanation, supporting H1. Participants in the Baseline condition (with no explanation for the robot’s path) were significantly worse at predicting the robot’s true route compared to all those who were given an explanation, even though the path explanations only provided three landmarks along the path rather than a complete route. Also, they rated the robot’s credibility (and specifically its predictability) as lower than many other explanation conditions. These results suggest that explanations do align users’ expectations of robot behavior. Despite these differences, Baseline condition participants did perform moderately well on the dot-tracking task, indicating that the lack of an explanation did not interfere with their ability to complete this secondary task. It is possible that these participants put more effort into their tasks because of their lack of information.
Among the various explanation conditions, participants in the Passing By condition frequently performed significantly better on our metrics than those in other explanation conditions. Participants in the Passing By condition were better able to predict the robot’s path than those in the Avoids condition and performed best at tracking the dot throughout the robot’s path, including during crosswalks, turns, and control periods. The Egocentric Turns explanations often elicited good responses—they were not significantly worse than Passing By explanations on any metric—but participants who received these explanations did not demonstrate statistically better responses than the other explanation conditions. The other three explanations—Human-Generated, Allocentric Turns, and Avoids—did not elicit consistent patterns of results.
These results partially support H2: the Passing By explanations (the shortest and least complex) outperformed Allocentric Turns and Human-Generated explanations, which were longer and less familiar, as well as the Avoids condition, which was the same length but required more complex mental transformations to deduce the route. While Egocentric Turns explanations were longer and could require more mental effort to interpret lefts and rights, the familiar format (similar to typical human-generated route instructions) likely reduced these effects.
Interestingly, participants who were given the Human-Generated explanations often performed significantly worse than those in the other explanation conditions, particularly on the dual-task measures involving the dot-tracking task. This suggests that humans do not always produce the best explanations. Finding a balance between the length and complexity of explanations seems to be the key to helping users understand robot routes. Future work is needed to understand how these competing needs should be balanced in a variety of domains.
Our assessment of user expectations while the robot was navigating relied on the proxy of attention. The psychological literature on the surprise-attention link led us to hypothesize that participants with less accurate expectations of robot behavior would be more likely to be surprised by the robot’s actions and would therefore dwell on watching the robot navigate rather than attending to their dot-tracking task. Our findings indicate that the Passing By explanations led to a higher dot-tracking proportion than other explanations and also to significantly lower path integrals. However, our findings were unable to show a more direct link between the path integral score and the mouse-on-dot proportion across conditions. Some of this inconsistency may be due to the similarities across our explanation conditions as well as to general interest in the robot, especially its slow turns. Other issues may include the study setup: the online study did not allow us to monitor gaze directly to understand what the participants were focusing on. Further work is needed to tease apart the various contributors to the differences in attention across conditions.
Our research was limited by the context in which it occurred. Originally, this experiment was planned as an in-person laboratory study; however, we quickly had to shift to using an online platform because of local COVID-19 restrictions. As a result, we relied on videos of the robot navigating the map. This shift may have made the robot more difficult to track visually. A variety of online technical issues (i.e., an overloaded server and a bug in saving the responses) led to the loss of potential data. Additionally, more participants had to be eliminated for not following our instructions (i.e., pressing buttons too many times or getting distracted and leaving the website) than is typical for our in-person research. While the online study may better reflect virtual monitoring tasks, we believe that physically turning one’s head to see a real robot would increase the impact of the monitoring on the dual task (dot tracking, in our case). Additionally, a live robot may raise the perceived stakes and importance of monitoring the robot, which may further impact the results.
Our decision to test many different explanation conditions was driven by the numerous possible ways to present similar route information. The complexity of language as well as the seemingly conflicting results from prior work made it difficult to predict which conditions would be most beneficial to users; thus, we compared many of them. In addition to the language used, another question when designing our explanations was how long the explanations should be. We used a pilot study to determine that the directions to three landmarks would be best for our domain of moderate length routes.
Our results provide additional support for the multifaceted nature of user expectations and trust in explanations. Like prior work studying explanation content [38] and length [19], we focused on the impact of a variety of types of explanations on user expectations. We measured robot trust and explicability both through task measures and through subjective surveys. Our dual tasks and wide array of questionnaire items revealed a number of contrasts between participants’ performance and their perceptions of their experiences. Overall, we conclude that explanations have a significant impact on users’ abilities to predict, monitor, and neglect an autonomous mobile robot, allowing them to maintain their own productivity and have better subjective experiences.
Finally, given that route navigation explanations require domain and location-specific language, the generalizability of this research needs to be tested further. The current research on city street navigation explanations should be expanded to include different maps and environments, more complex and longer paths, physical 3-dimensional obstacles, and varying landmark density. Further analysis should also include the specificity of individual explanations: for any map or route, unique characteristics will affect whether a given number of landmarks in an explanation describes one versus multiple potential paths. Within route navigation more generally, further studies are needed to examine how the navigation location (i.e., office building versus road) impacts the length and language requirements as well. While our current research looked at the path integral as a measure of actual and predicted path similarity, more work needs to be done in a variety of domains to understand whether explanations should help in identifying the exact path (as opposed to one on close-by parallel streets, for example) and what impact that has on explanation length. Previous work on route navigation explanations often focused on self-driving vehicles. To expand upon work looking at explanation content [38] and length [19] to include these vehicles, researchers could examine different styles of route information and how they set passengers’ expectations and impact trust. To generalize this work beyond the route navigation domain, similar explanation generation methods could be tested for contexts such as task planning for pick-up and drop-off tasks for robots in warehouse, hospital, or office settings that also include landmarks to describe progress.
7 Conclusion
One challenge in deploying mobile robots in human environments is the impact that they have on the productivity of the people around them. They often require monitoring for possible failures, which can be distracting for people trying to focus on their own work. Explanations have the potential to help people understand and predict a mobile robot’s route, making it easier for people to trust the robot to work autonomously for longer periods of time and allowing them to focus on their own tasks.
We designed a dual-task study to test the influence of different types of route explanations on participants’ abilities to allocate their attention to both a robot monitoring task and a dot-tracking task. We found that explanations significantly improved participants’ ability to predict the route that a robot would take and also their perceptions of the predictability of the robot. Additionally, we found that explanations that were more concise and familiar significantly improved participants’ ability to perform their own task while monitoring the robot at appropriate times. We conclude that explanations help people update their expectations about robot behavior, allowing people to be more productive while also monitoring their robots.
Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, 582.
Dan Amir and Ofra Amir. 2018. Highlights: Summarizing agent behavior to people. In Proceedings of the 17th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS’18). 1168–1176.
Victoria Bellotti and Keith Edwards. 2001. Intelligibility and accountability: Human considerations in context-aware systems. Hum.–Comput. Interact. 16, 2–4 (2001), 193–212.
Dan Bohus, Chit W. Saw, and Eric Horvitz. 2014. Directions robot: In-the-wild experiences and lessons learned. In Proceedings of the International Conference on Autonomous Agents and Multi-agent Systems (AAMAS’14). International Foundation for Autonomous Agents and Multi-Agent Systems, 637–644.
Tad T. Brunyé, Stephanie A. Gagnon, Aaron L. Gardony, Nikhil Gopal, Amanda Holmes, Holly A. Taylor, and Thora Tenbrink. 2015. Where did it come from, where do you go? Direction sources influence navigation decisions during spatial uncertainty. Quart. J. Exper. Psychol. 68, 3 (2015), 585–607.
Wolfram Burgard, Armin B. Cremers, Dieter Fox, Dirk Hähnel, Gerhard Lakemeyer, Dirk Schulz, Walter Steiner, and Sebastian Thrun. 1998. The interactive museum tour-guide robot. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI’98). 11–18.
A. Bussone, S. Stumpf, and D. O’Sullivan. 2015. The role of explanations on trust and reliance in clinical decision support systems. In Proceedings of the International Conference on Healthcare Informatics. 160–169.
Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, and Subbarao Kambhampati. 2019a. Explicability? Legibility? Predictability? Transparency? Privacy? Security? The emerging landscape of interpretable agent behavior. In Proceedings of the International Conference on Automated Planning and Scheduling. 86–96. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/3463.
Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, and Subbarao Kambhampati. 2019b. Plan explanations as model reconciliation—An empirical study. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI’19). IEEE, 258–266.
Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, and Subbarao Kambhampati. 2017. Plan explanations as model reconciliation: Moving beyond explanation as soliloquy. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI’17). 156–163.
James M. Dabbs, E.-Lee Chang, Rebecca A. Strong, and Rhonda Milun. 1998. Spatial ability, navigation strategy, and geographic knowledge among men and women. Evol. Hum. Behav. 19, 2 (1998), 89–98.
Marie-Paule Daniel and Michel Denis. 1998. Spatial descriptions as navigational aids: A cognitive analysis of route directions. Kognitionswissenschaft 7, 1 (1998), 45–52.
Michel Denis. 1997. The description of routes: A cognitive approach to the production of spatial discourse. Cahiers de Psychologie Cognitive 16 (8 1997), 409–458.
Michel Denis, Francesca Pazzaglia, Cesare Cornoldi, and Laura Bertolo. 1999. Spatial discourse and navigation: An analysis of route directions in the city of Venice. Appl. Cogn. Psychol. 13, 2 (1999), 145–174.
Munjal Desai, Poornima Kaniarasu, Mikhail Medvedev, Aaron Steinfeld, and Holly Yanco. 2013. Impact of robot failures and feedback on real-time trust. In Proceedings of the 8th ACM/IEEE International Conference on Human-robot Interaction. IEEE Press, 251–258.
Na Du, Jacob Haspiel, Qiaoning Zhang, Dawn Tilbury, Anuj K. Pradhan, X. Jessie Yang, and Lionel P. Robert. 2019. Look who’s talking now: Implications of AV’s explanations on driver’s trust, AV preference, anxiety and mental workload. Transport. Res. Part C: Emerg. Technol. 104 (2019), 428–442.
Na Du, Kevin Y. Huang, and X. Jessie Yang. 2019. Not all information is equal: Effects of disclosing different types of likelihood information on trust, compliance and reliance, and task performance in human-automation teaming. Human Factors. Retrieved from.
Rachel Gockley, Allison Bruce, Jodi Forlizzi, Marek Michalowski, Anne Mundell, Stephanie Rosenthal, Brennan Sellner, Reid Simmons, Kevin Snipes, Alan C. Schultz, and Jue Wang. 2005. Designing robots for long-term social interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. 1338–1343.
Michael A. Goodrich and Dan R. Olsen Jr. 2003. Seven principles of efficient human robot interaction. In IEEE International Conference on Systems Man and Cybernetics, Vol. 4. 3943–3948.
Samuel D. Gosling, Peter J. Rentfrow, and William B. Swann Jr. 2003. A very brief measure of the Big-Five personality domains. J. Res. Personal. 37, 6 (2003), 504–528.
Bradley Hayes and Julie A. Shah. 2017. Improving robot controller transparency through autonomous policy explanation. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, 303–312.
Stephen Hirtle, Kai-Florian Richter, Samvith Srinivas, and Robert Firth. 2010. This is the tricky part: When directions become difficult. J. Spatial Info. Sci. 2010, 1 (2010), 53–73.
Jiun-Yin Jian, Ann M. Bisantz, and Colin G. Drury. 2000. Foundations for an empirically determined scale of trust in automated systems. Int. J. Cogn. Ergon. 4, 1 (2000), 53–71.
Christopher Kohl, Marlene Knigge, Galina Baader, Markus Böhm, and Helmut Krcmar. 2018. Anticipating acceptance of emerging technologies using Twitter: The case of self-driving cars. J. Bus. Econ. 88, 5 (2018), 617–642.
S. Li, R. Scalise, H. Admoni, S. S. Srinivasa, and S. Rosenthal. 2017. Evaluating critical points in trajectories. In Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN’17). 1357–1364.
Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing. ACM, 195–204.
Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2119–2128.
Bonita Marlene Muir. 1989. Operator’s Trust in and Use of Automatic Controllers Supervisory Process Control Task. Ph.D. Dissertation. University of Toronto.
Ali Gurcan Ozkil, Zhun Fan, Steen Dawids, Henrik Aanes, Jens Klestrup Kristensen, and Kim Hardam Christensen. 2009. Service robots for hospitals: A case study of transportation tasks in a hospital. In Proceedings of the IEEE International Conference on Automation and Logistics (ICAL’09). IEEE, 289–294.
Luke Petersen, Lionel Robert, Jessie Yang, and Dawn Tilbury. 2019. Situational awareness, driver’s trust in automated driving systems and secondary task performance. SAE Int. J. Connect. Autonom. Vehicles 2, 2 (2019), 129–141.
Preeti Ramaraj, Saurav Sahay, Shachi H. Kumar, Walter S. Lasecki, and John E. Laird. 2019. Towards using transparency mechanisms to build better mental models. In Proceedings of the 7th Goal Reasoning Workshop: Advances in Cognitive Systems, Vol. 7. 1–6.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1135–1144.
Stephanie Rosenthal, Sai P. Selvaraj, and Manuela M. Veloso. 2016. Verbalization: Narration of autonomous robot experience. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’16). 862–868.
Stephanie Rosenthal and Manuela M. Veloso. 2012. Mobile robot planning to seek help with spatially-situated tasks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI’12), Vol. 4. 1.
Roykrong Sukkerd, Reid Simmons, and David Garlan. 2018. Towards explainable multi-objective probabilistic planning. In Proceedings of the 4th International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS’18).
Sebastian Thrun, Maren Bennewitz, Wolfram Burgard, Armin B. Cremers, Frank Dellaert, Dieter Fox, Dirk Hahnel, Charles Rosenberg, Nicholas Roy, Jamieson Schulte, et al. 1999. MINERVA: A second-generation museum tour-guide robot. In Proceedings of the IEEE Conference on Robotics and Automation, Vol. 3. IEEE.
Suzanne Tolmeijer, Astrid Weiss, Marc Hanheide, Felix Lindner, Thomas M. Powers, Clare Dixon, and Myrthe L. Tielman. 2020. Taxonomy of trust-relevant failures and mitigation strategies. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI’20). Association for Computing Machinery, New York, NY, 3–12.
Nicholay Topin and Manuela Veloso. 2019. Generation of policy-level explanations for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 2514–2521.
Ning Wang, David V. Pynadath, and Susan G. Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explanations. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI’16). IEEE, 109–116.
X. Jessie Yang, Vaibhav V. Unhelkar, Kevin Li, and Julie A. Shah. 2017. Evaluating effects of user experience and system transparency on trust in automation. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. ACM, 408–416.