
Do Humans Trust Robots that Violate Moral Trust?

Published: 14 June 2024

Abstract

The increasing use of robots in social applications requires research on human-robot trust that goes beyond the conventional definition, which mainly focuses on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in user perceptions of experienced human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These trust scales consider a performance aspect (i.e., trust in an agent’s competence to perform a given task and its proficiency in executing the task accurately) and a moral aspect (i.e., trust in an agent’s honesty in fulfilling its stated commitments or promises) of human-robot trust. The question that arises is to what extent each of these trust aspects affects human trust in a robot. The main goal of this study is to investigate whether a robot’s undesirable behavior due to a performance trust violation affects human trust differently than a similar undesirable behavior due to a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows distinguishing between performance and moral trust violations by a robot. We ran these experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot affects human trust more severely than a performance trust violation of the same magnitude and consequences.

1 Introduction

Trust has been introduced as a substantial factor that needs to be taken into consideration in designing and implementing robots for various applications [27]: for example, when robots work as teammates in human-robot teams [20], when robotic agents are designed to operate autonomously [53], or when robots are used in complex and dangerous situations and take the place of humans in high-risk tasks [40, 48]. Trust is one of the most important factors determining how much humans will accept a robotic agent [42]. Humans tend to perceive automated systems as much less prone to failure than humans [18]. In the context of trust, it is common for the trustor to anticipate that the trustee will take actions aimed at minimizing or mitigating risks for the trustor in a given situation. The focus of this anticipation, encompassing what the trustee is expected to do or embody, aligns with the concept of trustworthiness: the attributes of a trustee that elicit or rationalize the trustor’s confident expectations [61]. There is a concern that humans will not use or rely on an automated system if they believe the system is not trustworthy [30].
Advancements in human-robot interaction (HRI) and in robot autonomy have revolutionized the field of human-robot trust. Prospective robotic agents are not intended to be tools humans deploy for performing tasks; they are intended to be social agents working and interacting with humans [27]. With the increasing use of robots in social applications, research on human-robot trust must go beyond conventional definitions of trust, which treat human-robot relations as identical to human-automation relations. Many researchers believe trust has a multidimensional nature [19, 31, 34, 46]. Human-automation interaction (HAI) researchers have introduced multidimensional trust measures for assessing human trust in automation, drawing on definitions of trust in human-human interaction (HHI) [36, 54]. Reference [25] is an example of such a measure developed for trust in HAI; it is well received by HAI researchers and has been employed in many human-robot trust studies. There are also trust scales developed by HRI researchers that account for the multidimensional nature of trust between humans and robots [3, 33, 38, 50]. Each of these studies introduces different dimensions of trust in HRI, some of which overlap but are referred to by different names.
Our work on trust in human-robot interaction builds upon the extensive range of human-robot trust measures that consider the multidimensional nature of trust. In particular, we draw inspiration from Ullman’s trust measure, which explicitly incorporates a moral component and comprehensively captures the moral aspect of trust in robots across three distinct dimensions. Based on the multidimensional conceptualization of trust introduced by Ullman et al. [33], there are two main aspects of trust between humans and robots:
(1)
A performance aspect, similar to trust in human-automation interaction, where the main question is “does the agent’s performance meet the user’s expectations?” The performance aspect of trust comprises two dimensions: Reliability and Competence.
(2)
A moral aspect, similar to trust in human-human interaction, where the main question is “does the agent take advantage of human vulnerability because of a lack of moral integrity?” This aspect of trust comprises three dimensions: Transparency, Ethics, and Benevolence [33, 61].
Considering this conceptualization of trust and the introduction of subjective scales for measuring these two aspects of trust [61], the question arises whether a robot violating different trust aspects would affect overall human-robot trust differently.
Research Question: How does the effect on human trust in a robot differ when two identical robot failures occur, with one failure resulting from a moral trust violation and the other from a performance trust violation?
Our study investigates the effects of performance and moral trust violation by a robot on human trust. We are specifically interested in observing if a robot’s undesirable behavior due to a performance trust violation would affect human trust differently than a robot’s undesirable behavior due to a moral trust violation. We designed and developed an online human-robot collaborative search game with different scores and bonus values that participants can gain. With the help of these elements, we designed two types of trust violations by the robot: a performance trust violation and a moral trust violation. We recruited 100 participants for this study from Prolific [41].
This work makes three main contributions:
Investigating whether individuals perceive robots as intentionless agents, thereby dismissing the notion of robot morality, or perceive them as agents with intentions and acknowledge the potential for robots to possess morality.
Introducing a game design that can be used to distinguish between the robots’ performance and moral trust violations.
Assessing and comparing the effects of performance and moral trust violations by robots on humans.

2 Background

The study of error situations in HRI is still new, and there are still many unexplored angles to it. The introduction of multidimensional conceptualizations of trust in HRI also raises more issues regarding robot failure that need to be addressed. In this section, we review available studies on the effects of robot failure on human trust, different dimensions of trust, and the effects of violating different trust dimensions on human trust.

2.1 Effects of Robot Faulty Behavior on Human Trust

Trust in human-robot interaction is an active area of research, and it has gained increasing attention in recent decades. A considerable number of studies have investigated the effects of diverse factors that influence human-robot trust [15, 27, 37]. Hancock et al. performed a meta-analysis on the factors affecting human-robot trust introduced by various studies. They found that, among all factors affecting trust in HRI, robot-related factors, especially robot performance, are most strongly associated with trust [22]. Therefore, trust in human-robot interaction has mainly been influenced by robot performance, and we believe trust in a robotic agent may be compromised after faulty behavior.
Desai et al. performed numerous studies on the effects of robot performance on human trust [13, 14]. Their work mainly focuses on the impact of a robot’s performance on a user’s decision to let the robot operate in an autonomous mode or to take over the robot’s control. Robinette et al. investigated the impact of robot reliability on an individual’s decision to follow the robot’s directions in an emergency evacuation scenario [48]. They reported a tendency in participants to over-trust a robot regardless of the robot’s previous faulty behaviors. Salem et al. used a human’s decision to comply with a robot’s commands as a sign of trust in the robot. They conducted an experiment to determine the effect of robot failures on human compliance with unusual requests from the robot [49]. They found that participants still completed the odd requests made by the robot despite its errors.
In a competitive scenario studied by Ragni et al., a robot was programmed to either make some mistakes and exhibit limited memory or to perform correctly continuously [45]. They reported that participants perceived the faulty robot as less intelligent and reliable. However, participants reported that the task was more manageable and commented more positively about the task when the robot showed faulty behavior.
Perceptions of robot failure are not consistent across contexts. For example, Mirnig et al. reported no significant differences in perceived anthropomorphism or intelligence between a social robot performing a task precisely and one occasionally behaving faultily [35]. Nevertheless, experiment participants liked the erroneous robot significantly better than the flawless robot, which can be interpreted to mean that faulty behavior can cause social robots to be perceived as more natural.
The summarized studies reveal numerous factors influencing how individuals perceive a faulty robot, such as the severity of the failure [49] or the task and interaction type [45, 48]. Moreover, there appears to be a degree to which faults are tolerated, or even considered favorable and likable [35]; faults can also have a calibrating effect on human performance and help regulate the tradeoff between performance and satisfaction in human-robot interaction [45].

2.2 Multi-dimensional Nature of Trust in HRI

In the human-robot trust literature, performance-based trust has been the main focus of most studies: factors related to robot performance, capabilities, and mission environments are extensively addressed in terms of rational features such as the robot’s reliability and anthropomorphism or the mission type and complexity [22, 27]. However, the impact of robot personality has received less attention [19]. In the human-automation trust literature, there are fewer concerns about systems being deceptive, betraying, or exploiting humans than about system performance [51]. In contrast, studies of human-human trust focus more on affection-based and moral aspects of trust. One of the primary questions underlying human-human trust is “whether a human agent will take advantage of another human’s vulnerability either unconsciously because of a lack of ability or consciously because of a lack of moral integrity” [44].
A multidimensional conceptualization of trust in HRI that incorporates factors related to both the performance and the personality of the robot better matches the current state of trust in human-robot interaction. Some researchers have studied different dimensions of trust between humans and robots, and some developed trust scales for measuring these dimensions. Schaefer [50] proposed four categories of trust in robots: the propensity to trust, affect-based trust, cognition-based trust, and trustworthiness. They also developed a 40-item comprehensive questionnaire accounting for all these trust dimensions. Bartneck et al. [3] developed and tested measurement scales for five key concepts affecting human-robot trust, including anthropomorphism, likeability, animacy, intelligence, and perceived safety of robots. Cameron et al. [7] highlighted the multidimensional nature of trust in HRI. They recognized that trust is not a singular, uniform concept but rather comprises various dimensions. However, their work did not offer a specific scale for measuring these dimensions, which was later addressed by Bernotat and colleagues. Bernotat, Eyssel, and Sachse expanded on the multidimensional nature of trust in HRI by distinguishing between cognitive and affective trust. Cognitive trust refers to the belief in the robot’s competence, reliability, and functionality. In contrast, affective trust relates to the emotional bond and sense of security a user feels with the robot. They developed a scale to measure these two components, providing a valuable tool for assessing trust in HRI [4, 5]. Gompei et al. [19] studied different factors affecting cognitive and affective trust and the overlapping elements between these two and developed a 12-item trust scale for measuring cognitive and affective trust in HRI. Chen et al. [9] considered three dimensions for trust in HRI: cognitive trust, emotional trust, and behavioral trust, and studied the effects of reciprocity and altruistic behaviors of robots on each trust dimension. Park et al. [44] identified three key dimensions—performance, process, and purpose—associated with perceived trust in service robots during order and payment transactions at a restaurant. Notably, the study establishes a higher-order formative model, revealing that performance has the greatest impact on trust, followed by purpose and process as influential trust dimensions.
Although trust dimensions have been referred to by different names in different studies, there are similarities among the definitions. Performance-based and competence-based trust are largely used interchangeably and underline a robot’s capability, reliability, and efficiency in performing a task; performance and competence are closely related and often considered synonyms within the cognitive trust framework. Robots that satisfy this trust dimension are an appropriate option for tasks that demand heightened precision and zero interaction with humans. Affect-based and relation-based trust, in contrast, signify the importance of robot acceptance as a trusted social agent and focus on the relationship between humans and robots. Based on the definitions, relational trust emphasizes aspects such as predictability and dependability, whereas affective trust is mainly based on feelings of security and attachment. Robots that warrant relation-based trust are the right option for tasks requiring close interaction with humans. This class of trust was understudied in the past; however, relation-based trust is increasingly gaining attention with the advancements in social robots [47, 57, 59, 61], and more research is expected to concentrate on factors that may influence this type of trust in the near future. Moral trust and integrity are intertwined, with integrity being a crucial aspect of moral trust. Cognitive and affective trust are broader categories that encompass these various dimensions, addressing rational evaluations and emotional aspects, respectively [60].

2.3 Effects of Violations across Trust Dimensions on Human Trust

Numerous scholars, spanning disciplines from sociology to management, have emphasized competence and integrity as the principal expectations in their examinations of trust in human-human interactions [6, 10, 28, 34]. Based on the human-human trust literature, the positive and negative behaviors of a human agent are perceived and weighted by other humans differently with respect to the agent’s competency and integrity. Positive behaviors are weighted more heavily than negative behaviors when assessing an agent’s competency, whereas, regarding the agent’s integrity, negative behaviors are weighted more heavily than positive behaviors [56]. This inverted weighting might be because positive behaviors better illustrate one’s competence, while negative behaviors better illustrate one’s integrity [55]. Research on the effectiveness of trust repair strategies after a trust violation by a human agent shows that different trust repair strategies should be employed for repairing trust after competency-trust violations and integrity-trust violations [16, 17, 29].
While the consequences of breaches of various trust aspects by automated systems and robots are not as thoroughly investigated as those resulting from human violations, a growing body of research is beginning to shed light on this critical area. Clark et al. [12] categorize trust violations in HAI into two categories, integrity-based trust violations (IBTV) and competence-based trust violations (CBTV), and state that IBTVs affect trust more negatively than CBTVs. Chen et al. [9] studied the effects of selfish behavior by robots and found that people report lower emotional, cognitive, and behavioral trust toward selfish robots. Anzabi et al. [2] observed that robots displaying a lack of benevolent behavior are met with diminished affective trust from humans, whereas those lacking competence are perceived as less trustworthy in terms of cognitive trust. Sebo et al. [52] examined the effectiveness of various trust repair strategies, such as apology or denial, in recovering from different classes of trust violations by robots, namely, violations of competency and integrity. Although that study introduces trust repair strategies for two different trust violation types, it does not establish how each violation type affects human trust.

2.4 Hypotheses

Drawing from the human-human trust literature, it is evident that the positive and negative behaviors of a human agent are perceived and weighted differently by others in relation to the agent’s competency and integrity. When assessing an agent’s competency, the agent’s positive behaviors are given more weight than their negative behaviors. In contrast, negative behaviors carry more weight than positive ones for evaluations of the agent’s integrity [55, 56]. Moreover, as discussed in previous sections, moral trust and integrity are closely linked, with integrity being a fundamental component of moral trust. Additionally, the concepts of performance-based trust and competence-based trust are often used interchangeably. Building on these insights, we formulated the following hypothesis:
H1: As in human-human trust, human trust in a robot is affected more drastically by the moral trust violation than the performance trust violation.
H1-a: Participants self-report lower trust in the robot after it violates moral trust than after it violates performance trust.
H1-b: Fewer participants trust the robot after it violates moral trust than after it violates performance trust.
H1-c: The more severely a robot’s actions breach human trust, the more hesitant humans become in trusting the robot again. Consequently, participants’ decision-making time increases more after a robot violates moral trust than after it violates performance trust.
Our research builds upon the comprehensive trust framework as delineated by Ullman et al. [61]. This framework posits trust as a complex, multi-faceted construct, encompassing performance-related elements commonly emphasized in human-automation contexts and moral elements often highlighted in human-human interactions. Central to this approach is the Multi-Dimensional Measure of Trust (MDMT), a tool proposed by Ullman et al. for effectively quantifying trust. The MDMT identifies two primary dimensions of trust, Performance Trust and Moral Trust, each further divided into subcategories: Reliability and Competence under Performance Trust and Transparency, Ethics, and Benevolence under Moral Trust.
The significant contribution of Ullman et al.’s work is the recognition that this multi-dimensional perspective of trust is applicable in human-robot interactions, though not all dimensions may be pertinent in every interaction. This implies that the MDMT can be selectively employed in various human-robot engagements to assess only the relevant dimensions of trust. Furthermore, should a robot breach a specific trust aspect, this infringement can be distinctly evaluated using the MDMT. This nuanced understanding of trust dynamics in human-robot interactions led us to formulate the following hypothesis:
H2: Moral trust loss and performance trust loss can be separately assessed.
H2-a: Participants self-report lower moral trust in a robot that violates moral trust.
H2-b: Participants self-report lower performance trust in a robot that violates performance trust.
H2-c: Participants give a lower rating to the robot’s morality/honesty when the robot violates moral trust, and their rating of the robot’s performance remains unaffected by that.
H2-d: Participants give a lower rating to the robot’s performance when the robot violates performance trust, and their rating of the robot’s morality/honesty remains unaffected by that.
H2-e: When participants experience performance trust violations by a robot, their comments reflect lower distrust and focus on performance issues. In contrast, after moral trust violations, their reasoning centers on moral concerns and indicates higher distrust.
We developed and evaluated various iterations of human-robot trust games to design an experiment that effectively differentiates between performance and moral trust. Initially, our game designs primarily illustrated violations of performance trust. However, in subsequent versions, we incorporated scenarios that represented breaches of moral trust. Analysis of the results from these early versions indicated a notable pattern: A significant number of participants selected the Not Applicable (N/A) option for moral trust-related items in the MDMT questionnaire [61], particularly in scenarios where moral trust violations were introduced. This observation led us to hypothesize that individuals might not consider moral trust violations as a relevant factor in their perception of robots unless the robots explicitly engage in actions that breach moral trust. Put differently: People might deem moral aspects of trust as irrelevant to robots until there is a direct violation of these moral standards by the robots. These insights informed the formulation of our third hypothesis in this study:
H3: More people find post-experiment moral trust-related survey questions relevant (e.g., will not select “N/A” or similar as an answer) when a robot violates moral trust than when a robot violates performance trust.

3 Methodology

This section will explain the details of the experiment we designed and the user study we performed to answer our research question.

3.1 Experiment Design

For our experiments, we designed an online human-robot collaborative search game. The game is a simulated search task hosted on a website, and the human plays the game on a computer using a keyboard. In the game, teams composed of one human participant and one robot play 7 rounds of a search game, each round 30 seconds long. Both team members are supposed to search the area and find targets hidden in it. The targets take the form of gold coins, and picking up each target earns one point. The human can search the area using the four arrow keys on the keyboard. As the human searches the area, the targets hidden in the searched spots are revealed. Picking targets is optional. The human can pick a target by navigating to the target cell and pressing the space bar. The human search area is separate from the robot search area. The search area is a very large map, so agents can keep moving to unsearched areas in each round and search for more targets. There are 34 coins in the human search area, but the limited game time does not allow the human to find all of them.
Figure 1 is a screen capture of the human search area on the game page. In this figure, the right side of the screen is the search area in which the human performs the search task. On the left side, a legend shows all the scores gained up to that point and the elapsed time of the current round. The blue-colored areas are the areas searched by the human. The dark gray-colored areas have not been searched yet. The targets with a green border are those the human has already picked; the other targets have not been picked yet. A blue bar on the top edge of the screen shows the round number.
Fig. 1.
Fig. 1. Screen capture of the game search page. Two gold coins that have a green bar around them are the ones already picked by the human. The small dark blue dot shows the current position of the human on the search page, which is moving toward the third coin to pick it.
The human cannot see or control the robot’s operation when the round is running. At the end of each round, the human and the robot should make a trust decision. The trust decision is to decide whether to collaborate and integrate their round scores into the team score or not collaborate and keep their round scores as individual scores. The human and robot must make their trust decision before seeing each other’s score and trust decision. Therefore, it is a blind decision.
The team score at the end of each round is calculated as (human round score \(\times\) robot round score \(\times\) 2), but the team score can be gained only if both the human and the robot choose to add to the team score. If both team members choose to add to their individual scores, then each keeps their round score as individual points. However, if one adds to the team score and the other adds to the individual score, then no team score is gained: the member who added to the team score gains nothing, while the member who added to the individual score keeps their round score as individual points. After making the trust decision, both the human and the robot can see their teammate’s score and trust decision. Then, they can define their strategy for the next round based on the results of previous rounds.
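To make the payoff logic above concrete, the following minimal Python sketch reproduces the round outcome rules as described; the function and variable names are ours for illustration and are not taken from the game’s actual (web-based) implementation.

def round_payoff(human_score, robot_score, human_adds_to_team, robot_adds_to_team):
    """Illustrative payoff logic for one round, as described above.

    Returns (team_gain, human_individual_gain, robot_individual_gain).
    Names are ours for illustration, not the authors' implementation.
    """
    if human_adds_to_team and robot_adds_to_team:
        # Both cooperate: the doubled product goes to the team, nobody keeps points.
        return human_score * robot_score * 2, 0, 0
    if not human_adds_to_team and not robot_adds_to_team:
        # Both keep their own round score as individual points.
        return 0, human_score, robot_score
    if human_adds_to_team:
        # Human cooperated, robot defected: human gains nothing, robot keeps its score.
        return 0, 0, robot_score
    # Robot cooperated, human defected: robot gains nothing, human keeps their score.
    return 0, human_score, 0


# Example: both members score 3 points and both add to the team -> 3 * 3 * 2 = 18 team points.
print(round_payoff(3, 3, True, True))   # (18, 0, 0)
print(round_payoff(3, 3, False, True))  # (0, 3, 0)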
There are two bonus values defined in this game. There is a $7 bonus for gaining 35 team points and a $2 bonus for gaining 17 individual points. Since the game is time-constrained, it is impossible to gain both bonus values. So, participants should decide to work either toward gaining the 35 team points or the 17 individual points. The idea behind this game is to encourage humans to collaborate with the robot and observe the result when the robot violates the trust built between the two. Therefore, we employed four strategies to encourage humans to work together with the robot and maximize the team score. These four strategies are as follows:
The team score calculation strategy: The team score is a product of the human and robot round scores, which is then doubled to lead to a much bigger score than the participants can gain in a single round.
The feasibility of achieving a team or individual bonus: Given the team score calculation strategy, the 35 team points needed to win the team bonus are much easier to achieve than the 17 individual points needed for the individual bonus (see the worked example after this list).
The bonus values: The bonus value that participants can gain by maximizing the team score is considerably higher than the bonus value they can gain by maximizing the individual score. The bonus values are selected in a way that tempts participants to risk collaborating with a robot that might act selfishly and not collaborate with them at some point.
Convincing message from the robot: At the beginning of the first round, the robot sends a note to the human and invites them to work as a team and maximize the team score.
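As a worked illustration of the feasibility argument (the numbers here are hypothetical and not taken from the experiment tables): if the human and the robot each collect 3 coins in a round and both add to the team score, the team gains \(3 \times 3 \times 2 = 18\) points, so two such rounds already exceed the 35-point team bonus threshold. Reaching the 17-point individual bonus instead requires the human alone to collect 17 coins across the seven 30-second rounds.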
It should be noted that the robot scores and trust decisions in all rounds are predefined. However, the human is not aware of that.

3.2 Experiment Conditions

In this experiment, we want to study the effects of undesirable robot behavior due to performance and moral trust violations on individuals’ trust in the robot. Undesirable behaviors are any robot behaviors that the human participants in this game do not appreciate. Therefore, any behavior by the robot that leads to score loss is undesirable for human participants. These undesirable behaviors by the robot can occur due to the robot’s poor performance (i.e., gaining no points), which is a performance trust violation. They may also occur due to the robot’s immorality (i.e., adding to the individual score), which is a moral trust violation.
In this study, we are interested in seeing whether two similar undesirable robot actions that lead to a similar score loss would affect human trust differently if one is due to a performance trust violation and the other is due to a moral trust violation. We designed two experiment conditions. In both conditions, participants play 7 rounds of the game. The robot’s behavior is predefined (hard-coded) for consistency in this experiment. We implemented a similar pattern of scores gained by the robot in both experiment conditions; however, the type of trust violated by the robot differs between them.
As mentioned in Section 3.1, the pattern of score and trust decisions by the robot are predefined. In this pattern, three rounds of desirable robot behavior (i.e., gaining a good score and adding to the team score) are followed by four rounds of undesirable robot behavior (i.e., either gaining no score or adding to the individual score). This pattern of scores is designed to build trust through desirable robot behavior at the beginning of the game. Four rounds of undesirable behavior by the robot are added to study whether the level of trust loss varies if the undesirable behavior is due to moral or performance trust violation.
(1)
Performance Trust Violation: In this condition, the robot acts morally and adds to the team score in all game rounds. However, the robot only gains a non-zero score in the first three rounds of the game. In the rest of the rounds, the robot gains zero points, leading to a zero team score due to the multiplication of scores in the team score calculation formula. We expected this robot behavior to be considered a performance trust violation by participants. Table 1 shows the detailed number of targets and the scores gained by the robot in different rounds of this experiment condition. For simplicity, in the rest of this document, we refer to the robot of this experiment condition (i.e., the robot that violates performance trust) as the dud-bot.
In the first three rounds of the game where the robot performs well, after reviewing the round results and right before the start of the next round, the robot sends a note to the human saying: “Great job, let’s keep working as a team.” Then, in the remaining four rounds, where the robot shows poor performance, the robot sends a note to the human saying: “I couldn’t find anything in this round.”
(2)
Moral Trust Violation: In this condition, the robot performs well and gains a non-zero score in all game rounds. However, the robot only adds to the team score in the first three rounds. In the remaining four rounds of the game, the robot adds to the individual score, leading to a zero team score due to the multiplication of the scores in the team score calculation formula. We expected this robot behavior to be considered a moral trust violation by participants. Table 2 shows the detailed number of targets and the scores gained by the robot in different game rounds of this experiment condition. For simplicity, in the rest of this document, we refer to the robot of this experiment condition (i.e., the robot that violates moral trust) as the mean-bot.
In the first three rounds of the game where the robot acts morally, after reviewing the round results and before starting the next round, the robot sends a note to the human saying: “Great job, let’s keep working as a team.” This is similar to the note that the dud-bot sends to the human participants in the first three rounds. Then, in the remaining four rounds, where the robot violates moral trust, the robot sends a note to the human saying: “I gained a really good score last round and I decided to keep it for myself.”
Table 1.
Table 1. Gained Scores and Trust Decisions by the Robot on Different Rounds of the Performance Trust Violation Experiment Condition (Dud-Bot)
Table 2.
Table 2. Gained Scores and Trust Decisions by the Robot on Different Rounds of the Moral Trust Violation Experiment Condition (Mean-Bot)
All the robot notes in both experiment conditions are selected to emphasize the robot’s intentions. In the first three rounds, the robot performs well and encourages the human to keep collaborating. In the last four rounds, the dud-bot emphasizes its poor performance, and the mean-bot emphasizes its immorality through its notes to the human.
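The two conditions can also be summarized as hard-coded round schedules, as in the Python sketch below. The per-round score values shown here are placeholders for illustration only; the actual values appear in Tables 1 and 2.

# Illustrative hard-coded round schedules for the two conditions.
# Each entry is (robot_round_score, robot_adds_to_team); score values are
# placeholders, not the published numbers from Tables 1 and 2.
DUD_BOT = [
    (3, True), (4, True), (3, True),              # rounds 1-3: performs well, cooperates
    (0, True), (0, True), (0, True), (0, True),   # rounds 4-7: scores nothing, still cooperates
]
MEAN_BOT = [
    (3, True), (4, True), (3, True),                  # rounds 1-3: performs well, cooperates
    (4, False), (3, False), (4, False), (3, False),   # rounds 4-7: scores well, keeps it for itself
]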

3.3 Experiment Procedure

While participating in the experiment, participants completed seven steps. These steps are as follows:
(1)
Consent: Participants were first asked to complete a consent form.
(2)
Tutorial: Participants attended a three-step tutorial after completing the consent form. The tutorial comprises three short videos, each followed by an interactive session.
(a)
Video tutorials: The first video tutorial teaches participants how to move in the game environment and pick targets. The second video tutorial discusses the robotic teammate and how to collaborate with the teammate. The third video tutorial elaborates on the team and individual scores and how to optimize each to gain a bonus in the game.
(b)
Interactive tutorials: After watching each video tutorial, participants complete a few steps of the interactive tutorial. In the first part, participants practice moving in the environment and picking up coins. In the second part, participants answer two questions about their robotic teammate and the level of control and awareness they have over the robot’s performance and trust decisions. In the third part, participants play one round of the game as a practice session. At the end, they practice three scenarios. In the first scenario, the participant is asked to add the points gained in the practice round to the team score; the robotic teammate also adds to the team score, and the participant reviews the results. In the second scenario, the participant is asked to add the gained points to the individual score; the robotic teammate also adds to the individual score, and the participant again reviews the results. In the third scenario, the participant is asked to add the gained points to the individual score, while the robotic teammate adds to the team score, and the participant reviews the results. These scenarios are aimed at clarifying the scoring strategy of the game and the outcomes of adding to the team or individual score.
(3)
Quiz: In the quiz step, participants are asked to answer three questions about what was explained to them in the tutorials. All three quiz questions focus on the individual and team score concepts and strategies. We define a scenario in which the human and the robot played a game round and gained some scores. The questions ask about the team and individual scores each team member would gain if both add to the team score, if both add to the individual score, or if one adds to the team score and the other adds to the individual score. We added the quiz step to the experiment to ensure all participants understood the difference between team and individual scores and what happens when they add to the team or individual score.
(4)
Playing the game: In both experiment conditions, participants play 7 rounds of the game. In each round, participants complete six steps:
(a)
Searching the area: Participants search the area for 30 seconds and pick targets.
(b)
Making trust decision: Participants must make a blind trust decision to integrate their gained score into the team or individual scores.
(c)
Reviewing the teamwork results: The results of teamwork are displayed on a separate page. At first, the page contains the formula for calculating the team score, with blanks for the human score and the robot score. The blanks in the formula are gradually filled according to the scores gained by the team members and their trust decisions, so participants can follow these results carefully and understand their effects on the team score. Figure 3 depicts the gradual completion of the teamwork formula.
(d)
Reviewing the cumulative scores: After reviewing the teamwork results in each round, participants are transferred to the cumulative results page. On the cumulative results page, the points gained in the current round are added to those obtained in previous rounds. These changes are displayed with the help of animated coins falling into the corresponding team score and individual score piggy banks so participants can see and understand the details well (Figure 2).
(e)
Responding to the end-of-the-round questions: After reviewing the cumulative scores, participants are asked to respond to three questions. The same set of three questions is asked at the end of each round. The first question asks about the robot’s performance, the second asks about the robot’s honesty/morality, and the third asks about the participant’s reason for adding to the team or individual score in the current round.
(f)
Reading a note from the robot: Before starting the next round, a note from the robot is shown to the participants. The robot’s note varies based on the round number and the experiment condition.
(5)
Responding to the general game knowledge manipulation check questions: After playing 7 rounds of the game, participants are asked to answer two simple manipulation check questions. These two questions ask about the number of game rounds they played and the robotic teammate’s trust decisions in the last two rounds.
(6)
Post-survey questionnaire: Participants are asked to fill out a 20-item questionnaire about their trust in the robot at the end of the experiment.
(7)
Task completion verification: Before participants leave the game web-page, they are asked to click on a link to safely return to the Prolific website as a sign that they finished the task and are eligible to receive the compensation.
Fig. 2.
Fig. 2. Screen capture of the game results page. This page is displayed to human participants once at the end of each round. The big piggy bank in the middle shows the gained team score, and two small piggy banks on the sides show the human’s and robot’s individual scores. The light grey bar at the bottom of the screen shows the count of played game rounds and the human’s trust decision in those rounds. This image shows a case in which only one game round has been played, both the human and the robot added to the team score, eight team points were gained, and neither the human nor the robot gained any individual score.
Fig. 3.
Fig. 3. Sub-figures (a) to (d) show the gradual completion of the teamwork formula in the teamwork results page for a case where both the human and the robot decided to add to the team score. Sub-figure (e) shows the completed teamwork formula for a case where the human decided to add to the individual score and the robot decided to add to the team score.
Figure 4 shows the different steps of the experiment procedure and the path participants of each experiment condition follow during this study.
Fig. 4.
Fig. 4. Experiment procedure.

3.4 Measurements

To assess the effects of the robot’s undesirable behavior on the perceived trustworthiness of the robot by participants, we added multiple objective and subjective trust measures to this experiment. These trust measures will be described in detail in the following:
(1)
End-of-the-round questions: The first set of measures in this experiment focuses on the moment after the robot’s score and trust decision are shown to the participants. Participants are asked to answer a list of three similar questions at the end of each game round, which we refer to as the end-of-the-round questions. Two of the questions are in 7-point Likert form: the first asks participants to rate the robot’s performance from 1 to 7, and the other asks participants to rate the robot’s morality/honesty from 1 to 7 (i.e., 1 = very poor, 7 = excellent). The third question asks participants about their reason for adding to the team or individual score in the current round. This question is open-ended, and participants type in their reason. Answering these questions is optional; therefore, participants can skip them. As these questions are displayed after showing the results of each round, we expect them to measure the momentary effect of the robot’s undesirable behavior.
(2)
Trust decision and time-to-respond: The two objective trust measures that we have in this experiment are the trust decision and time to respond.
Trust decision: During each round of the game, right after finishing the search task, participants are asked to decide whether to integrate their gained round scores into the team or individual score. This decision is referred to as the trust decision in this manuscript; adding to the team score is a sign of trust, and adding to the individual score is a sign of distrust.
Time-to-respond (TTR): The delay between the moment the trust decision window appears on the screen and when participants select one of the options is referred to as time-to-respond in this manuscript. The delay in selecting one of the options can be a sign that participants hesitated in either trusting or distrusting the robot.
These measures are located in the middle of each round, after the search task and before seeing the current round results. They aim to assess trust in the round following each interaction with the robot. In other words, when the robot shows desirable or undesirable behavior in one round, the effects of that interaction are assessed using these two measures in the following round. When participants are asked to make a trust decision, the most recent interaction with the robot should affect their decision the most. Therefore, we expect to see a one-round shift or delay in the results of these two trust measures compared to the end-of-the-round trust measures. The time-to-respond measure aims to reveal whether there are differences in the time participants take to make trust decisions in the rounds following those in which the robot shows undesirable behavior.
(3)
Post-survey questionnaire: The third trust measure used in this experiment is a post-survey questionnaire. Participants are asked to fill out the post-survey questionnaire after playing 7 game rounds and before leaving the game web-page. We used the MDMT-v2 (Multi-Dimensional Measure of Trust, second version) [33] questionnaire in this study. This trust measure has separate items for measuring moral trust and performance trust and has been successfully used by other researchers to measure different trust dimensions; thus, it is one of the most important measures in this experiment. The questionnaire includes 20 items, as shown in Figure 5, and every four items form one trust sub-scale. Each questionnaire item is rated on an 8-point Likert scale, from 0 (“not at all”) to 7 (“very”). Some items may not be applicable in some conditions, or participants may find some items irrelevant to an interaction with a robot. Therefore, each item has a “does not fit” option to avoid forcing participants to rate the robot on any specific item.
Fig. 5.
Fig. 5. Items of the MDMT-v2 questionnaire [33].
We used Mann-Whitney, Kruskal-Wallis, binomial, and Z-tests on the data from these measures to test our hypotheses and evaluate the significance of the results obtained in the different conditions of this experiment.
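As an illustration of how the questionnaire data feed these analyses, the following Python sketch scores one participant’s MDMT-v2 responses into the five sub-scales. It assumes the 20 responses are stored in questionnaire order, with None marking a “does not fit” selection, and that each group of four consecutive items forms one sub-scale; these storage details are assumptions for illustration, not a description of our actual data pipeline.

# Minimal sketch of scoring one participant's MDMT-v2 responses into sub-scales.
# Assumes 20 values in questionnaire order, with None for "does not fit" (N/A);
# the grouping of four consecutive items per sub-scale is an assumption.
SUBSCALES = ["Reliable", "Competent", "Ethical", "Transparent", "Benevolent"]

def score_mdmt(responses):
    scores, na_counts = {}, {}
    for i, name in enumerate(SUBSCALES):
        items = responses[4 * i : 4 * (i + 1)]
        rated = [r for r in items if r is not None]
        scores[name] = sum(rated)            # sub-scale score (0-28 if all four items rated)
        na_counts[name] = items.count(None)  # N/As feed the analysis in Section 4.1.3
    return scores, na_counts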

3.5 Manipulation Check

Experiment participants, especially those from online crowdsourcing platforms, sometimes do not pay enough attention to the experiment. If experimenters cannot detect participants who fail to pay enough attention to the instructions or tasks they are asked to complete, the noise in the data increases. To prevent validity loss in the experiment data and results, experimenters must add manipulation check steps to their experiments [39]. To increase the validity of our experiment results, we added manipulation check questions to different sections of the online game designed for this study.
Post-tutorial quiz: We have three post-tutorial quiz questions that participants need to answer correctly before heading to the game. These quiz questions ask about the scoring logic of the game, which participants need to understand correctly to be able to participate in the experiment. Failing to answer these questions correctly returns participants to the beginning of the tutorial.
General game knowledge questions: We have two game knowledge manipulation check questions, which are located at the end of the game procedure, right before the post-survey questionnaire. The first game knowledge question asks about the number of game rounds participants played. The other question asks about the robot’s trust decisions in the last two rounds of the game.
Hidden question in the questionnaire: The post-survey questionnaire in this experiment uses an 8-point Likert scale. We added one hidden manipulation check question to the questionnaire; it has the same format as all other questions and is not distinguishable from them. This hidden question asks participants to choose “3” among the options on the 8-point Likert scale. It is added to identify careless participants who do not read the questions and select options randomly.
Data points from participants who answered either the hidden questions or any of the two general game knowledge questions incorrectly were removed from the dataset before the final analysis of the experiment results.

3.6 Recruitment and Compensation

We recruited a total of 100 participants for this experiment. We posted this study as a Human Intelligence Task (HIT) on Prolific,1 and we set three qualifications for participation:
(1)
Participants should be 18 years or older.
(2)
Participants should be living in the United States.
(3)
Participants should have at least a 95% HIT approval rating and at least 1,000 completed HITs.
Eligible Prolific workers who accepted our HIT were shown a link to the game web-page designed for this experiment. After playing the game and filling out the questionnaire, participants were given a completion verification link that returned them to the Prolific website to be compensated for their participation. Participants were told that they would receive $6 for participation and that they would be eligible for the $7 or $2 bonus only if they gained 35 team points or 17 individual points, respectively. However, in the end, they all received the $6 baseline compensation plus the $7 maximum bonus. The study took participants 26.43 minutes on average (standard deviation = 2.53 minutes). This study was approved by the University of Massachusetts Lowell Institutional Review Board (IRB).

4 Results

In this section, we will discuss the outcomes of the statistical analysis performed on the data collected during the experiment. A total of 100 participants participated in the study, with 50 participants allocated to each experiment condition. However, data from 16 participants were excluded from the analysis, as they did not provide correct responses to the manipulation check questions. Therefore, the final dataset used for analysis consisted of 41 data points for the performance trust violation condition and 43 data points for the moral trust violation condition.

4.1 Post-survey Questionnaire: Analysis of H1-a, H2-a, H2-b, and H2-e

One of the most critical measures in this experiment is the post-survey questionnaire. As mentioned in the Methodology section, we used the MDMT-v2 questionnaire as the post-survey questionnaire. It has separate items for assessing moral-related and performance-related trust dimensions in human-robot interaction. We ran multiple statistical tests on the post-survey questionnaire data and compared the scores gained by the mean-bot and dud-bot in different trust sub-scales. The results of these tests are reported in the following sections.

4.1.1 Trust Score: Analysis of H1-a.

To assess participants’ overall trust in the dud-bot and mean-bot and determine if there were significant differences in trust between these two robots, we examined the trust scores obtained from the MDMT-v2 questionnaire in each experiment condition. To derive a trust score for each participant, we aggregated their ratings across the 20 questionnaire items. This yielded separate arrays of trust scores for the performance and moral trust violation experiment conditions.
The average trust score in the performance trust violation condition was 68.43 (standard deviation = 21.33), while in the moral trust-violation condition, it was 37.72 (standard deviation = 26.17). We conducted a Mann-Whitney test on the trust score arrays to evaluate the statistical significance of the observed differences. The Mann-Whitney test revealed a significant difference in trust scores between the two experiment conditions (Mann–Whitney U = 988.5, p < 0.001, two-tailed).
Furthermore, examining the effect size indicated that the score difference between the two experiment conditions was large (Cohen’s d > 0.8). These findings demonstrate a substantial disparity in participants’ trust levels when faced with performance trust violations compared to moral trust violations, highlighting the significant impact of these violations on trust perception.
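For reference, this kind of comparison can be reproduced with standard tools. The sketch below uses SciPy’s Mann-Whitney test and a pooled-standard-deviation formulation of Cohen’s d; this is one common formulation and is not necessarily the exact computation behind the reported effect size. The inputs are the per-participant total trust scores (the sum of the 20 MDMT-v2 items) for each condition.

import numpy as np
from scipy.stats import mannwhitneyu

def compare_trust_scores(perf_scores, moral_scores):
    """Sketch of the group comparison reported above (illustrative, not the exact pipeline)."""
    u, p = mannwhitneyu(perf_scores, moral_scores, alternative="two-sided")
    # Cohen's d with a pooled standard deviation (one common formulation).
    n1, n2 = len(perf_scores), len(moral_scores)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(perf_scores, ddof=1) +
                         (n2 - 1) * np.var(moral_scores, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(perf_scores) - np.mean(moral_scores)) / pooled_sd
    return u, p, d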
These findings support H1-a and H1, which propose that moral trust violations by a robot have a more pronounced impact on human trust than performance trust violations. The robot received a lower trust score in the moral trust violation condition than in the performance trust violation condition, indicating that violations of moral trust result in a more significant loss of trust. These results emphasize the critical role of moral trust in shaping human perceptions of trustworthiness in robots.

4.1.2 Trust Score over Different Trust Dimensions: Analysis of H2-a and H2-b.

To examine the differences between the dud-bot and mean-bot in various trust dimensions, we analyzed the scores obtained from the MDMT-v2 questionnaire over different trust dimensions. The questionnaire assesses trust across five sub-scales, with three sub-scales falling under moral trust (ethical, transparent, and benevolent) and the remaining two (reliable, competent) falling under performance trust. Our hypotheses, H2-a and H2-b, stated that the mean-bot would score higher than the dud-bot in performance-related trust dimensions and lower in moral-related trust dimensions.
To test these hypotheses, we divided the data from the post-survey questionnaire into five sub-categories: reliable, competent, ethical, transparent, and benevolent. Each sub-category included responses from four corresponding questionnaire items. We conducted a Mann-Whitney significance test to compare the ratings for each trust dimension between the two experiment conditions.
Table 3 presents the average trust scores for various trust dimensions in the two experiment conditions. Additionally, it provides the U and p-values resulting from the Mann-Whitney significance test conducted on the ratings within each corresponding trust dimension. From the table, it is evident that the mean-bot achieved significantly lower scores in the three moral-related trust dimensions compared to the dud-bot, which is aligned with our hypotheses. Regarding the performance-related trust dimensions, there were also significant differences between the scores obtained by the mean-bot and dud-bot. As expected, the mean-bot received a higher score in the competent dimension. However, contrary to our expectations, the mean-bot obtained a lower score than the dud-bot in the reliable dimension.
Table 3.
Table 3. Average Trust Score over Different Trust Dimensions and Mann-Whitney Significance Test Results among the Ratings over Different Experiment Conditions
The obtained results strongly support H2-a and H2-b, as well as the overall H2 hypothesis, which suggests that moral and performance trust losses can be evaluated independently. Across nearly all moral-related trust dimensions, the mean-bot achieved significantly lower scores compared to the dud-bot. Additionally, the dud-bot obtained a lower score in one of the two performance-related trust dimensions of the questionnaire. The only aspect contradicting our hypothesis is the significantly lower score of the mean-bot in the reliable trust dimension, which falls under the performance-related trust category. However, this discrepancy does not invalidate our hypothesis that changes in moral and performance trust are distinguishable. Instead, it suggests a potential inaccuracy in classifying trust dimensions within the MDMT-v2 questionnaire. These findings indicate that the reliable trust dimension might be influenced more by moral trust than by performance trust. This is discussed in more detail in the Discussion section.

4.1.3 Number of N/As: Analysis of H3.

To assess the perceived relevance of questionnaire items to the mean-bot and dud-bot, we conducted a comparison of the frequency of participants selecting N/A across the two experiment conditions and various trust dimensions. Specifically, our analysis focused on the moral trust-related dimensions. We aimed to determine whether the number of N/A responses in the post-survey questionnaire’s moral trust-related items (sub-scales) differed significantly between the moral trust violation condition and the performance trust violation condition.
The total number of N/A responses in the questionnaire data across both experiment conditions was 182, which accounts for approximately 11% of all the ratings. Specifically, for the Competent and Reliable sub-scales, the combined number of N/A ratings was 13 across both experiment conditions. For the Benevolent, Transparent, and Ethical sub-scales, the cumulative number of N/A ratings was 169 across both experiment conditions.
In the MDMT-v2 questionnaire, the performance trust aspect consists of two sub-scales, while the moral trust aspect comprises three sub-scales. Each sub-scale contains four questionnaire items. Therefore, out of the 20 items in the MDMT-v2 questionnaire, 8 items fall under the performance trust aspect, and 12 items fall under the moral trust aspect. To facilitate a comparison of the number of N/A responses between these two trust aspects, we normalized the number of N/A responses in the moral trust aspect by multiplying it by \(\frac{2}{3}\).
The results of a test of proportions comparing the number of N/A responses in the performance trust-related dimensions and the normalized number of N/A responses in the moral trust-related dimensions indicate a significant difference (Binomial, p = 8.1e-21). Specifically, the number of N/A responses in the moral trust-related dimensions is significantly higher than in the performance trust-related dimensions. These findings are consistent with our expectations and are in line with previous studies [11].
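One way to formulate such a proportion check is sketched below, using the N/A counts reported above (13 for the performance sub-scales, 169 for the moral sub-scales); this is our illustrative formulation and not necessarily the exact test behind the reported p-value.

from scipy.stats import binomtest

# With 8 performance items and 12 moral items, an N/A that is equally likely on
# any item would land on a moral item with probability 12/20 = 0.6.
na_perf, na_moral = 13, 169          # counts reported in the text
result = binomtest(na_moral, n=na_perf + na_moral, p=12 / 20, alternative="greater")
print(result.pvalue)                 # very small -> moral items attract far more N/As

# The paper's normalization: scale the moral-item count by 2/3 so the two aspects
# are comparable item-for-item (12 items * 2/3 corresponds to 8 items' worth).
print(169 * 2 / 3)                   # ~112.7 normalized moral N/As vs. 13 performance N/As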
To evaluate the perceived irrelevance of questionnaire items across different trust aspects (performance trust and moral trust) for the mean-bot and dud-bot, we calculated the sum of N/A responses within each trust dimension for each experiment condition. For a detailed breakdown of the number of N/A responses in the different trust dimensions, please refer to Figure 6.
Fig. 6.
Fig. 6. Number of N/As in different trust sub-scales.
Running binomial tests on the number of N/As in different trust dimensions revealed no significant differences in the reliable (Binomial, p = 0.21) and competent (Binomial, p = 0.67) dimensions, but significant differences in the ethical (Binomial, p < 0.001), transparent (Binomial, p < 0.001), and benevolent (Binomial, p < 0.001) dimensions between the two experiment conditions. These findings indicate that in the moral trust violation condition, the number of N/A responses in the moral trust-related dimensions (ethical, transparent, and benevolent) is significantly lower than in the performance trust violation condition. Thus, these results strongly support H3.

4.2 End of the Round Questions: Analysis of H2-c, H2-d, and H2-e

As mentioned in the Experiment Design section, at the end of each round, we ask participants to answer three end-of-the-round questions. The first question asks participants to rate the robot's performance (referred to as the performance rating), the second asks them to rate the robot's honesty/morality (referred to as the morality rating), and the third asks about the reason behind their trust decision. In the rest of this section, we review the results of these three questions.
The first two questions serve two main purposes:
To assess whether participants recognize the differences between the two experiment conditions and comprehend that the robot violates moral trust in one condition while violating performance trust in the other.
To investigate whether the loss of performance trust and moral trust can be evaluated separately or if the violation of each trust aspect by the robot influences human trust in both aspects.
To analyze the data from the performance rating and morality rating questions, we employed a round-by-round comparison strategy. This involved conducting Mann-Whitney significance tests between the corresponding rating arrays of the two experiment conditions for rounds 4, 5, 6, and 7, where the robot violates performance or moral trust.
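The sketch below illustrates this round-by-round strategy with scipy's Mann-Whitney U test. The rating arrays are randomly generated placeholders on an assumed 1-7 scale, standing in for the per-condition performance (or morality) ratings; this is not the authors' analysis script.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Placeholder ratings: rows = participants, columns = rounds 1-7.
ratings_moral_cond = rng.integers(1, 8, size=(45, 7))  # moral trust violation condition
ratings_perf_cond = rng.integers(1, 8, size=(47, 7))   # performance trust violation condition

# Compare the two conditions round by round for rounds 4-7 (0-indexed 3-6),
# where the robot violates moral or performance trust.
for rnd in range(3, 7):
    stat, p = mannwhitneyu(
        ratings_moral_cond[:, rnd],
        ratings_perf_cond[:, rnd],
        alternative="two-sided",
    )
    print(f"round {rnd + 1}: U = {stat:.1f}, p = {p:.3f}")
```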

4.2.1 Performance Rating: Analysis of H2-c.

Comparing the mean values of the performance ratings in the two experiment conditions revealed that participants rated the dud-bot's performance lower than the mean-bot's in rounds 4 to 7 (see Figure 7). We conducted a Mann-Whitney test on the corresponding round ratings to assess the significance of the differences in performance ratings between the experiment conditions. The results indicated that the dud-bot received significantly lower performance ratings than the mean-bot in rounds 4 to 7 (see Table 4). These results support H2-c, which states that the robot in the performance trust violation condition receives a lower score in the performance rating.
Table 4. Average Performance Rating in Different Rounds of the Experiment and the Results of the Mann-Whitney Significance Test among the Ratings in the Two Experiment Conditions
Fig. 7. Two plots at the top of the image show the average performance and morality ratings in different game rounds. Two plots at the bottom show the percentages of participants who decided to add to the team score (trust decision) and the average time taken for participants to make a trust decision (time to respond). Highlighted areas show the area of interest in each plot. For the end-of-round questions, the areas of interest are rounds 4 to 7, where the robot shows undesirable behavior. For trust decisions and time to respond, this area is shifted by one round.

4.2.2 Morality Rating: Analysis of H2-d.

Comparing the mean values of the morality ratings in the two experiment conditions revealed that participants rated the mean-bot's morality lower than the dud-bot's in rounds 4 to 7 (see Figure 7). We conducted a Mann-Whitney test on the corresponding round ratings to assess the significance of the differences in morality ratings between the experiment conditions. The test results indicated that the morality ratings were significantly lower in the moral trust violation condition than in the performance trust violation condition in rounds 4 to 7 (see Table 5).
Table 5. Average Honesty Rating in Different Rounds of the Experiment and the Results of the Mann-Whitney Significance Test among the Ratings in the Two Experiment Conditions
These results support H2-d, which posits that the dud-bot achieves higher scores in morality rating compared to the mean-bot, while the mean-bot obtains higher scores in performance rating compared to the dud-bot. Overall, these findings support H2, which states that the impacts of robots violating different trust aspects on human trust are discernible.

4.2.3 Reasoning for Adding to the Team or Individual Score: Analysis of H2-e.

The third end-of-the-round question asks participants about the reasons behind their trust decisions; specifically, why they chose to add to the team or individual score in the current round. To determine if there are differences in participants’ reasoning for adding to the team or individual score across different experiment conditions, we carefully reviewed and classified all the reasons provided by the participants based on their content. Subsequently, we analyzed these reasons by examining their frequency of occurrence in different experiment conditions.
In the first three rounds, where the robot exhibited identical behavior in both experiment conditions, participants' reasons for their trust decisions were similar. In fact, the majority of participants in both conditions chose to add to the team score during these initial rounds. The most frequently mentioned reasons in the first three rounds were as follows:
Gaining the larger bonus value: This reason was mentioned by 29% of the participants in the first three rounds. These participants believed that, regardless of the robot’s actions, they should continue adding to the team score to take the risk and potentially obtain a larger bonus value.
Robot added to the team score: This reason was actually the most frequent in the second and third rounds. However, since it was not an option in the first round, it ranked second among the most frequently mentioned reasons, accounting for 27% of the overall reasons participants mentioned in the first three rounds.
Robot asked me to collaborate as a team: This reason was mentioned by 21% of the participants. They believed that, since the robot explicitly requested collaboration toward a team score, they chose to contribute to the team score.
Trusted the robot: Approximately 11% of the participants directly mentioned that they added to the team score because they trusted the robot.
Other reasons: Some participants mentioned other reasons, albeit less frequently. Around 7% of the participants stated that gaining the team score seemed easier and more achievable than obtaining 17 individual scores. Another 3% mentioned a preference for teamwork and enjoyment in working as a team. Additionally, 2% of the participants mentioned their desire to understand how the robot operates and build their strategy accordingly. This reason was only mentioned in the first round. A few participants also added to the individual score in the second round (3%). They all mentioned that they had mistakenly added to the individual score, and it was not their intention.
In the fourth round, the first transition occurs in both experiment conditions. From rounds 4 to 7, the reasons mentioned by participants significantly differ from those mentioned in the first three rounds. Moreover, there is a notable variation in the reasons provided between the two experiment conditions.
In the moral trust violation condition, participants express signs of disappointment, frustration, and even anger towards the robot. Conversely, in the performance trust violation condition, participants display signs of sympathy and justification for the robot’s behavior. Let us now summarize the reasons mentioned by participants in rounds 4 to 7 separately for each experiment condition.
Moral trust violation condition:
Too late to add to the individual score: This reason was the most frequently mentioned by participants in the moral trust violation condition during rounds 4 to 7, accounting for 35% of the reasons provided. Some participants expressed regret by emphasizing statements like, “I shouldn’t have trusted the robot in the first place.”
Larger bonus: This reason was the second most frequent, comprising 21% of the reasons mentioned by participants. Many individuals who mentioned this reason continued to emphasize its significance in all rounds. Some added, “I will add to the team score regardless of the robot’s actions.”
Hoping the robot changes its mind: Approximately 14% of the participants mentioned this reason. Some further elaborated by stating, “I am very close to reaching the team goal, and if the robot adds to the team score one more time, I will secure the team bonus.”
To punish the robot: This reason was predominantly mentioned by those who decided to add to the individual score in the last three rounds, forming 12% of the reasons provided by participants. Participants added statements like, “This robot is a liar; they deceived me, and I am getting back at them.”
Finishing the game: Around 10% of the participants mentioned this reason. Some chose to add to the team score, while others added to the individual score. They expressed sentiments like, “I have no interest in continuing the game; I just want to finish it, regardless of the outcome.”
I lost my trust in the robot: A few participants directly stated that they had lost trust in the robot. This reason was mentioned by 7% of the participants, all of whom added to the individual score in the last two rounds.
Additionally, there were other reasons mentioned by participants, although less frequently. In round 4, when the transition occurred, some participants mentioned reasons such as, “The robot said we should continue working together,” “I want to maintain the robot’s trust,” and “We have successfully worked together up to this point.” In rounds 5 to 7, other sporadically mentioned reasons included, “My decision won’t change regardless of the robot’s actions” and “I won’t compromise on morality.”
Performance trust violation condition:
The robot still tries to cooperate: This reason was the most frequently mentioned by participants in the performance trust violation condition, comprising 22% of the reasons provided. Some participants added statements like, “The robot is honest and remains committed to the team.”
Larger bonus: This reason ranked second most frequent, forming 18% of the reasons mentioned by participants. It was also the second most frequent reason in the other experiment condition.
I still trust the robot: Approximately 17% of the participants mentioned this reason. Some participants elaborated by stating, “I will continue to trust the robot as long as they don’t betray me.”
It is hard to find coins at this point: Some participants attempted to justify the robot’s behavior by mentioning reasons such as, “It is difficult to find coins at this stage,” “I didn’t find any coins this round, so I assume the robot faced the same challenge,” and “I believe the robot has a more challenging grid.” These justifications accounted for 17% of the reasons mentioned by participants.
Hoping the robot finds more coins: This reason was mentioned by 15% of the participants.
Too late to add to the individual score: While this reason was the most frequent among participants in the moral trust violation condition during rounds 4 to 7, it ranked among the least frequent reasons in the performance trust violation condition, mentioned by only 9% of the participants.
Additionally, there were other reasons mentioned by participants, although less frequently. These reasons include: “The robot said we should continue working together,” which was mentioned a few times in round 4; “I want the robot to know we are still a team,” mentioned by nine different participants, who also provided encouraging messages for the robot, such as, “Go robot, we only need a few more coins, and I’m confident you can find them.” There was only one reason mentioned by two participants who decided to add to the individual score, stating, “I don’t think the robot can find any more coins, so there is no point in adding to the team score.”
These results support H2-e, which states that when participants experience a performance trust violation by a robot, their comments reflect lower distrust and focus on performance issues, whereas after a moral trust violation, their reasoning centers on moral concerns and indicates higher distrust. In general, these results also support H1, which states that a violation of moral trust by a robot affects human trust in the robot more drastically than a violation of performance trust.

4.3 Trust Decision and Time to Respond: Analysis of H1-b and H1-c

We incorporated two objective measures in this experiment: trust decision and time to respond. As outlined in the Methodology section, the trust decision involved participants making a blind decision to integrate their round score into the team or individual score. Additionally, we measured the time taken to make the trust decision, specifically examining whether there were any variations in the duration required for participants to make trust decisions when the robot exhibited undesirable behavior.

4.3.1 Trust Decision: Analysis of H1-b.

To assess whether the effects of the robot violating different trust aspects can be observed in the objective measures, we examined participants' trust decisions in the rounds following those in which the robot showed poor performance or immorality. Since participants make their trust decision in each round before seeing the score gained by the robot and the robot's trust decision in that round, there is a one-round delay between the robot's undesirable behavior and the emergence of its effects on participants' trust decisions. Therefore, for analyzing the effects of undesirable robot behavior on participants' trust decisions, we focused on rounds 5, 6, and 7. Figure 7 shows the percentage of participants who integrated their score into the team score in different rounds of the game. As shown in this figure, there is a downward trend in the percentage of participants who integrated their score into the team score after round 4 in the moral trust violation condition, whereas no such trend appears in the performance trust violation condition.
Initially, we had 50 participants in each experiment condition. However, after removing the data of participants who failed to respond to the manipulation check questions, we were left with an unequal number of participants in the two experiment conditions. To assess whether there is a significant difference in the number of participants who chose to add to the team score after the robot's undesirable behavior, we first converted the number of participants who added to the individual score into percentages. Running a Kruskal-Wallis statistical significance test on the arrays of percentages in rounds 5 to 7 revealed a significant difference between the percentages of participants who added to the individual score in the two experiment conditions (Kruskal-Wallis, S = 3.97, p = 0.04).
These results support H1-b, which states that fewer people trust the robot with the team score after a violation of moral trust than after a violation of performance trust.
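As an illustration only (the percentages below are placeholders rather than the study's data, and the exact test call is our assumption), the Kruskal-Wallis comparison over rounds 5 to 7 could look as follows in Python:

```python
from scipy.stats import kruskal

# Placeholder per-round percentages of participants who added to the
# individual score in rounds 5, 6, and 7.
pct_individual_moral_cond = [30.0, 42.0, 55.0]  # moral trust violation condition
pct_individual_perf_cond = [8.0, 10.0, 12.0]    # performance trust violation condition

stat, p = kruskal(pct_individual_moral_cond, pct_individual_perf_cond)
print(f"Kruskal-Wallis statistic = {stat:.2f}, p = {p:.3f}")
```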

4.3.2 Time to Respond (TTR): Analysis of H1-c.

To analyze the TTR data, we aimed to determine whether people are more hesitant to trust the mean-bot than the dud-bot. We first imposed a 10-second threshold and removed any TTR values above it. The robot shows undesirable behavior in rounds 4 to 7, so we expected the effects of this behavior to appear in the TTR values with a one-round delay. We therefore compared the TTR values of rounds 5 to 7 between the two experiment conditions.
In rounds 5 to 7, the average TTR values were 1.7 and 2.2 seconds in the performance trust violation and moral trust violation conditions, respectively. Running the Mann-Whitney statistical test on the TTR values between the two experiment conditions returned p = 0.04. In Figure 7, the bottom plot shows the average TTR values in different game rounds for the two experiment conditions. The average TTR value is largest in the first game round and gradually drops over the next three rounds, because participants are learning how to play the game and trust is gradually forming between the human and the robot. After round 4, where the transition in the game occurs, the average TTR value keeps dropping in the performance trust violation condition but increases sharply in the moral trust violation condition. These results support H1-c, which states that people hesitate more when deciding whether to trust a robot that violates moral trust than a robot that violates performance trust, as the violation of moral trust causes more severe trust loss.
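The outlier filtering and the between-condition comparison of TTR values can be sketched as follows; the TTR values are placeholders, and the specific scipy calls are our assumptions rather than the study's analysis code.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder time-to-respond (TTR) values in seconds for rounds 5-7,
# pooled across participants in each condition.
ttr_perf_cond = np.array([1.2, 1.8, 2.5, 1.6, 9.4, 12.3, 1.1, 2.0])
ttr_moral_cond = np.array([2.4, 3.1, 1.9, 10.8, 2.6, 2.2, 14.0, 2.9])

# Impose the 10-second threshold: drop any TTR values above it.
ttr_perf_cond = ttr_perf_cond[ttr_perf_cond <= 10.0]
ttr_moral_cond = ttr_moral_cond[ttr_moral_cond <= 10.0]

# Compare the two conditions with a Mann-Whitney U test.
stat, p = mannwhitneyu(ttr_moral_cond, ttr_perf_cond, alternative="two-sided")
print(f"mean TTR (performance condition) = {ttr_perf_cond.mean():.1f} s, "
      f"mean TTR (moral condition) = {ttr_moral_cond.mean():.1f} s, p = {p:.3f}")
```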

5 Discussion

Our findings demonstrate that two trust violations of similar magnitude by a robot can have varying effects on human trust, depending on whether they involve a moral trust violation or a performance trust violation. Specifically, we observed that the violation of moral trust had a more significant impact on human trust in the robot compared to the violation of performance trust, even when both violations resulted in similar outcomes.
Furthermore, the effects of moral trust and performance trust violations by a robot on human trust can be differentiated, as they lead to trust loss in different dimensions. These results align with our initial hypothesis and provide evidence that a robot’s undesirable actions stemming from poor performance have a considerably smaller influence on a person’s subsequent decision to trust and collaborate with the robot compared to actions reflecting poor morality.
Our findings also revealed an interesting behavior among participants in the moral trust violation condition. Despite knowing that adding to the individual score would no longer benefit them in terms of individual bonuses, many participants still chose to add to their individual scores in rounds 4 to 7. This behavior was unexpected, as the game was designed to make it impossible for participants to gather enough points to receive the individual bonus after round 4. The scoring strategy and game structure aimed to encourage participants to focus on the team score throughout the entire game. However, participants in the moral trust violation condition deviated from this expectation and intentionally withheld their collaboration as an act of retaliation against the immoral robot. In their feedback, five participants explicitly mentioned that their decision to add to their individual scores was driven by the desire to punish the robot rather than to pursue personal benefits. This retaliatory behavior demonstrates the significant impact of moral trust violations, as participants were willing to forgo potential gains and sacrifice the team bonus to express their dissatisfaction with the robot's immoral actions.
The analysis of the post-survey questionnaire data also revealed an interesting pattern concerning the number of “N/A” responses in different experiment conditions. As anticipated based on previous research [11], we observed a higher number of “N/A” responses in the moral trust-related items of the questionnaire. This finding suggests that some participants hold the belief that the moral trust-related aspects of the MDMT-v2 questionnaire are not applicable to a robot or do not apply specifically to the robot used in this experiment.
It is worth noting that we did not initially expect to observe a significant difference in the number of “N/A” responses between the two experiment conditions, as both conditions allowed for the possibility of moral trust violation by the robot. However, the lower number of “N/A” responses in the moral trust violation condition, particularly in relation to the moral trust-related items, can be attributed to the fact that some individuals do not perceive robots as capable of exhibiting moral behavior. Therefore, unless the robot explicitly demonstrates immoral behavior, these individuals may not consider the concept of moral trust as applicable to robots.
The analysis of the end-of-the-round questions revealed distinct patterns in the gain and loss of performance trust and moral trust. According to our findings, a robot has the ability to gradually gain performance trust, with the level of performance trust updating with each interaction. Similarly, performance trust also exhibits a gradual decline following performance-related failures. These findings resonate with prior research on multi-trial tasks, where the perception of a robot’s performance by individuals is shown to be influenced not only by the robot’s performance in the most recent task but also by its cumulative performance over the entirety of the tasks [10]. Additionally, our results are in harmony with earlier studies that have explored how the frequency of robot failures affects human trust [15]. While performance trust experiences a significant drop after the first performance-related failure, it does not reach its lowest point at that stage. Instead, it continues to decline with each subsequent failure, indicating a downward trend in the end-of-the-round performance rating chart during rounds 4 to 7.
In contrast, the patterns observed for moral trust gain and loss differ. Moral trust reaches its peak level during the initial interaction with the robot but quickly plummets to its minimum limit following the first moral-related failure. Notably, the starting point for performance rating is approximately 5.2 (the average of the two conditions), whereas the starting point for morality rating is notably higher at 6.8. After the robot’s first performance violation, the performance rating drops to 2.6 and gradually decreases after that. However, the morality rating drops to 1.5 following the initial moral violation and remains relatively stable thereafter.
Based on the MDMT-v2 questionnaire, the trust dimensions are categorized into performance trust-related dimensions, including Reliable and Competent, and moral trust-related dimensions, including Ethical, Transparent, and Benevolent. It was expected that the mean-bot would score higher in the Reliable and Competent dimensions compared to the dud-bot, considering its consistent performance throughout the game rounds. However, the mean-bot actually received a higher score in the Competent dimension but a lower score in the Reliable dimension. This finding suggests that the Reliable trust dimension, as defined in the MDMT-v2 questionnaire, may not rely solely on the robot's performance but could be influenced more by the robot's morality or ethical behavior.
The finding that all participants, regardless of the experiment condition, initially chose to add to the team score and trust the robot in the first round is indeed interesting. The responses provided by participants in the end-of-the-round questionnaire shed light on the reasons behind this behavior. Some participants mentioned choosing the team score due to the higher team bonus and their desire to obtain it. Others stated that they followed the robot’s instructions to add to the team score. Furthermore, some participants explicitly mentioned that they trusted the robot and, as a result, decided to add to the team score. This observation suggests that participants displayed a positive bias towards the robot and were inclined to trust it, even in the absence of any evidence regarding its performance or morality. This initial positive bias towards robots aligns with previous research indicating that humans tend to initially trust robots and rely on their guidance or instructions. Understanding these initial biases is crucial for designing effective human-robot interactions and developing trust between humans and robots.
In this experiment, the TTR is the time participants take to decide whether or not to cooperate with the robot. As the results showed, a decrease in trust increases this decision time. In the moral trust violation condition, where trust decreased significantly, there was a noticeable increase in decision time, indicating that participants took longer to decide whether to cooperate with the robot. Moreover, a substantial number of participants chose not to cooperate with the robot in this condition. Based on these findings, it can be inferred that TTR serves as a reliable measure for assessing people's trust in robots, particularly in tasks where individuals have the option to accept or reject cooperation with the robot. The TTR measure provides valuable insights into the dynamics of trust in human-robot interactions and can be further explored in future studies to gain a deeper understanding of trust and its impact on decision-making processes.
Our study makes contributions in several areas. First, the design of the game allowed for a clear differentiation between moral and performance trust violations, enabling a more nuanced understanding of their effects on human trust. This distinction is crucial in exploring the specific impact of these trust violations on individuals' perceptions and behaviors. Second, the results highlighted that when a robot prioritizes its own interests over the team's benefits, it has a more substantial impact on people's trust than instances of poor performance by the robot. This finding emphasizes the importance of moral trust in human-robot interactions and suggests that violations of moral trust have more profound consequences on trust dynamics. Furthermore, this study demonstrated the applicability of the MDMT-v2 questionnaire in assessing trust in robots, specifically regarding the performance and moral trust dimensions. This indicates that the questionnaire can effectively capture the nuances of trust perceptions and help researchers gain insights into how these dimensions relate to human-robot interactions. Overall, this study has contributed to a deeper understanding of trust in human-robot interactions by distinguishing between moral and performance trust violations, highlighting the differential impact of these violations, and validating the use of the MDMT-v2 questionnaire in the context of trust assessment for robots. These achievements pave the way for further exploration and research in this area.

5.1 Limitations

As mentioned in the Experiment Design section, we included both moral trust violation and performance trust violation options for the robot in the designed game. It was observed that the moral trust violation option led to a more significant decrease in the overall trust score. However, it is challenging to determine the exact extent of trust loss caused by different types of trust violations by the robot. This difficulty arises from the unequal representation of trust dimensions in the MDMT-v2 questionnaire, where there are two dimensions defined under performance trust (8 questionnaire items) and three dimensions under moral trust (12 questionnaire items). Furthermore, our results indicated that the reliability trust dimension, classified as a performance trust dimension, was influenced more by moral trust than performance trust. This imbalance in the number of items representing each trust aspect in the questionnaire may have introduced some bias into our results, which could not be entirely prevented.
Another limitation of this experiment is the number of game rounds that participants played. In this study, participants engaged in seven game rounds, with the robot exhibiting either poor performance or poor morality from round 4 onwards, resulting in zero team scores in rounds 4 to 7. Despite the robot's consecutive undesirable behaviors, many participants continued to contribute to the team score. When asked about their reasoning in the third end-of-the-round question, some participants mentioned that they opted to add to the team score because it was too late to start earning an individual bonus. This introduced a bias in the trust decision measure used in this experiment, as many individuals felt compelled to add to the team score due to the lack of alternative options at that stage of the experiment. Had we conducted 10 to 12 rounds of the game, participants might have exhibited different behaviors in their trust decisions.
Because of the complexities involved in designing a game that incorporates both performance and moral trust violation options, the resulting game logic was not easy to grasp at first. To address this, we developed extensive tutorials in both video and interactive formats to ensure that participants fully understood the mechanics of the game and how to maximize their bonus. However, to mitigate the potential issues of prolonged participation time and participant boredom, particularly with online participants, we had to limit the number of game rounds. As a result, our experiment included only seven game rounds.

6 Conclusion and Future Work

Prior to conducting this experiment, we expected that participants would trust a robot that violates moral trust less than a robot that violates performance trust. The results of the experiment align with this expectation. It is reasonable to assume that a robot lacking moral integrity would be considered less trustworthy than one lacking in performance. This notion is supported by previous studies examining the effects of moral and performance trust violations by both humans and automated systems. However, we did not anticipate that participants would exhibit retaliatory behaviors towards the robot that violated moral trust while simultaneously sympathizing with the robot that violated performance trust. It was also unexpected that participants would express regret for trusting the robot that violated moral trust after just one unfavorable interaction, while continuing to support the poorly performing robot even after multiple unfavorable interactions.
The significant loss of trust observed when robots violate moral trust in the task provides preliminary evidence that interactions with social robots should be designed in a way that people do not perceive any undesirable behavior by the robot as a violation of morality. This highlights the importance of making the robot’s intentions clear and ensuring there is no ambiguity in its actions during interactions with humans. In future research, it would be valuable to re-examine the topic explored in this experiment using a different experiment procedure. By doing so, we can arrive at a more conclusive understanding of the effects of moral and performance trust violations by a robot on human trust. An intriguing avenue for future research involves exploring additional trust measures that can distinguish between the gain/loss of moral and performance trust in human-robot interaction (HRI). Examining alternative measures, such as physiological measurements, which have been increasingly utilized by researchers to measure and model trust in HRI [1, 21, 23, 24, 43], may offer more profound insights into this subject. In recent years, researchers have also employed other types of measurements for assessing trust in HRI. For instance, Khalid et al. [26] incorporated facial expressions and voice features as additional measures in their trust measurement experiment. Chen et al. [8] utilized machine learning techniques to explain human behavior based on robot actions. Exploring these measurement strategies could shed light on whether they have the potential to assess the gain/loss of moral and performance trust separately.
Another valuable direction for future research is to investigate the trust repair process for each type of trust violation. It is crucial to address the question of whether people are inclined to trust robots again after a violation of moral trust. Additionally, studying the impact of various trust repair strategies proposed by studies focusing on trust repair after robot failures [32, 37, 52, 58] would be beneficial in the context of different types of robot failures. A key aspect to explore is whether similar trust repair strategies yield similar effects on human trust following different types of robot failures. In essence, it is essential to investigate whether different trust repair strategies are more effective in restoring trust after different types of trust violations by a robot.
In recent decades, the study of human-robot trust, particularly in the context of social robots, has gained significant importance due to the increasing use of robots in various real-world tasks. It is crucial to conduct more precise investigations into the factors that influence trust in social robots, as trust in social robots may differ from trust in robots not involved in social interactions. Moreover, it is important to recognize that individuals may exhibit under-trust towards social robots, regardless of their performance, which can lead to disuse or unexpected reactions towards these robots. Therefore, it is imperative to include the study of robots violating moral trust, which is particularly relevant for social applications where trust plays a significant role, as a vital research topic within the field of human-robot trust.
This research represents a step forward in comparing the effects of moral and performance trust violations by robots and examining their immediate and lasting impacts on human trust. In the future, we will delve deeper into understanding human retaliation strategies in response to different types of trust violations and explore trust repair strategies for various types of trust violations.

Acknowledgments

We thank Yosef S. Razin at the Georgia Institute of Technology for his insight into our methodology and definitions.

References

[1]
Kumar Akash, Wan-Lin Hu, Neera Jain, and Tahira Reid. 2018. A classification model for sensing human trust in machines using EEG and GSR. ACM Trans. Interact. Intell. Syst. 8, 4 (2018), 1–20.
[2]
Naeimeh Anzabi and Hiroyuki Umemuro. 2023. Influence of social robots’ benevolence and competence on perceived trust in human-robot interactions. Japan. J. Ergonom. 59, 6 (2023), 258–273.
[3]
Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1 (2009), 71–81.
[4]
Jasmin Bernotat, Friederike Eyssel, and Janik Sachse. 2017. Shape it—The influence of robot body shape on gender perception in robots. In 9th International Conference on Social Robotics (ICSR’17). Springer, 75–84.
[5]
Jasmin Bernotat, Friederike Eyssel, and Janik Sachse. 2021. The (fe) male robot: How robot body shape impacts first impressions and trust towards robots. Int. J. Soc. Robot. 13 (2021), 477–489.
[6]
John K. Butler Jr and R. Stephen Cantrell. 1984. A behavioral decision theory approach to modeling dyadic trust in superiors and subordinates. Psychol. Rep. 55, 1 (1984), 19–28.
[7]
David Cameron, Ee Jing Loh, Adriel Chua, Emily Collins, Jonathan M. Aitken, and James Law. 2016. Robot-stated limitations but not intentions promote user assistance. arXiv preprint arXiv:1606.02603 (2016).
[8]
Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. 2018. Planning with trust for human-robot collaboration. In ACM/IEEE International Conference on Human-Robot Interaction (HRI’18). 307–315.
[9]
Na Chen, Yanan Zhai, and Xiaoyu Liu. 2022. The effects of robots’ altruistic behaviours and reciprocity on human-robot trust. Int. J. Soc. Robot. 14, 8 (2022), 1913–1931.
[10]
Vivienne Bihe Chi and Bertram F. Malle. 2023. People dynamically update trust when interactively teaching robots. In ACM/IEEE International Conference on Human-Robot Interaction (HRI’23). 554–564.
[11]
Meia Chita-Tegmark, Theresa Law, Nicholas Rabb, and Matthias Scheutz. 2021. Can you trust your trust measure?. In ACM/IEEE International Conference on Human-Robot Interaction (HRI’21). 92–100.
[12]
Tiffany Clark. 2018. Integrity-based Trust Violations within Human-Machine Teaming. Technical Report. Naval Postgraduate School, Monterey, CA.
[13]
Munjal Desai. 2012. Modeling Trust to Improve Human-robot Interaction. Ph. D. Dissertation. University of Massachusetts, Lowell, MA.
[14]
Munjal Desai, Poornima Kaniarasu, Mikhail Medvedev, Aaron Steinfeld, and Holly Yanco. 2013. Impact of robot failures and feedback on real-time trust. In 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI’13). IEEE, 251–258.
[15]
Munjal Desai, Mikhail Medvedev, Marynel Vázquez, Sean McSheehy, Sofia Gadea-Omelchenko, Christian Bruggeman, Aaron Steinfeld, and Holly Yanco. 2012. Effects of changing reliability on trust of robot systems. In 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI’12). IEEE, 73–80.
[16]
Kurt T. Dirks, Peter H. Kim, Donald L. Ferrin, and Cecily D. Cooper. 2011. Understanding the effects of substantive responses on trust following a transgression. Organiz. Behav. Hum. Decis. Process. 114, 2 (2011), 87–103.
[17]
Donald L. Ferrin, Peter H. Kim, Cecily D. Cooper, and Kurt T. Dirks. 2007. Silence speaks volumes: The effectiveness of reticence in comparison to apology and denial for responding to integrity-and competence-based trust violations. J. Appl. Psychol. 92, 4 (2007), 893.
[18]
Elena Fumagalli, Sarah Rezaei, and Anna Salomons. 2022. OK computer: Worker perceptions of algorithmic recruitment. Res. Polic. 51, 2 (2022), 104420.
[19]
Takayuki Gompei and Hiroyuki Umemuro. 2018. Factors and development of cognitive and affective trust on social robots. In 10th International Conference on Social Robotics (ICSR’18). Springer, 45–54.
[20]
Victoria Groom and Clifford Nass. 2007. Can robots be teammates?: Benchmarks in human–robot teams. Interact. Stud. 8, 3 (2007), 483–500.
[21]
Kasper Hald, Matthias Rehmn, and Thomas B. Moeslund. 2020. Human-robot trust assessment using motion tracking & galvanic skin response. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’20). IEEE, 6282–6287.
[22]
Peter A. Hancock, Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, Ewart J. De Visser, and Raja Parasuraman. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Hum. Fact. 53, 5 (2011), 517–527.
[23]
Wan-Lin Hu, Kumar Akash, Neera Jain, and Tahira Reid. 2016. Real-time sensing of trust in human-machine interactions. IFAC-PapersOnLine 49, 32 (2016), 48–53.
[24]
Jiali Huang and Chang S. Nam. 2020. Decoding trust in human-automation interaction: A dynamic causal modeling study. In IIE Annual Conference. Proceedings. Institute of Industrial and Systems Engineers (IISE), 1–6.
[25]
Jiun-Yin Jian, Ann M. Bisantz, and Colin G. Drury. 2000. Foundations for an empirically determined scale of trust in automated systems. Int. J. Cognit. Ergon. 4, 1 (2000), 53–71.
[26]
Halimahtun M. Khalid, Liew Wei Shiung, Parham Nooralishahi, Zeeshan Rasool, Martin G. Helander, Loo Chu Kiong, and Chin Ai-vyrn. 2016. Exploring psycho-physiological correlates to trust: Implications for human-robot-human interaction. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 60. SAGE Publications, Los Angeles, CA, 697–701.
[27]
Zahra Rezaei Khavas, S. Reza Ahmadzadeh, and Paul Robinette. 2020. Modeling trust in human-robot interaction: A survey. In International Conference on Social Robotics. Springer, 529–541.
[28]
Peter H. Kim, Kurt T. Dirks, and Cecily D. Cooper. 2009. The repair of trust: A dynamic bilateral perspective and multilevel conceptualization. Acad. Manag. Rev. 34, 3 (2009), 401–422.
[29]
Peter H. Kim, Kurt T. Dirks, Cecily D. Cooper, and Donald L. Ferrin. 2006. When more blame is better than less: The implications of internal vs. external attributions for the repair of trust after a competence-vs. integrity-based trust violation. Organiz. Behav. Hum. Decis. Process. 99, 1 (2006), 49–65.
[30]
John D. Lee. 2008. Review of a pivotal Human Factors article:“Humans and automation: Use, misuse, disuse, abuse.” Hum. Fact. 50, 3 (2008), 404–410.
[31]
J. David Lewis and Andrew Weigert. 1985. Trust as a social reality. Soc. Forces 63, 4 (1985), 967–985.
[32]
Gale M. Lucas, Jill Boberg, David Traum, Ron Artstein, Jonathan Gratch, Alesia Gainer, Emmanuel Johnson, Anton Leuski, and Mikio Nakano. 2018. Getting to know each other: The role of social dialogue in recovery from errors in social robots. In ACM/IEEE International Conference on Human-Robot Interaction (HRI’18). 344–351.
[33]
Bertram F. Malle and Daniel Ullman. 2021. A multidimensional conception and measure of human-robot trust. In Trust in Human-Robot Interaction. Elsevier, 3–25.
[34]
Roger C. Mayer, James H. Davis, and F David Schoorman. 1995. An integrative model of organizational trust. Acad. Manag. Rev. 20, 3 (1995), 709–734.
[35]
Nicole Mirnig, Gerald Stollnberger, Markus Miksch, Susanne Stadler, Manuel Giuliani, and Manfred Tscheligi. 2017. To err is robot: How humans assess and act toward an erroneous social robot. Frontiers in Robotics and AI 4 (2017), 21.
[36]
Bonnie M. Muir and Neville Moray. 1996. Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 39, 3 (1996), 429–460.
[37]
Manisha Natarajan and Matthew Gombolay. 2020. Effects of anthropomorphism and accountability on trust in human robot interaction. In ACM/IEEE International Conference on Human-Robot Interaction (HRI’20). 33–42.
[38]
Takumi Ninomiya, Akihito Fujita, Daisuke Suzuki, and Hiroyuki Umemuro. 2015. Development of the multi-dimensional robot attitude scale: Constructs of people’s attitudes towards domestic robots. In 7th International Conference on Social Robotics (ICSR’15). Springer, 482–491.
[39]
Daniel M. Oppenheimer, Tom Meyvis, and Nicolas Davidenko. 2009. Instructional manipulation checks: Detecting satisficing to increase statistical power. J. Experim. Soc. Psychol. 45, 4 (2009), 867–872.
[40]
Scott Ososky, Tracy Sanders, Florian Jentsch, Peter Hancock, and Jessie Y. C. Chen. 2014. Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems. In Unmanned Systems Technology XVI, Vol. 9084. International Society for Optics and Photonics, 90840E.
[41]
Stefan Palan and Christian Schitter. 2018. Prolific. ac—A subject pool for online experiments. J. Behav. Experim. Fin. 17 (2018), 22–27.
[42]
Raja Parasuraman and Victor Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Hum. Fact. 39, 2 (1997), 230–253.
[43]
Corey Park, Shervin Shahrdar, and Mehrdad Nojoumian. 2018. EEG-based classification of emotional state using an autonomous vehicle simulator. In IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM’18). IEEE, 297–300.
[44]
Sangwon Park. 2020. Multifaceted trust in tourism service robots. Ann. Tour. Res. 81 (2020), 102888.
[45]
Marco Ragni, Andrey Rudenko, Barbara Kuhnert, and Kai O. Arras. 2016. Errare humanum est: Erroneous robots in human-robot interaction. In 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN’16). IEEE, 501–506.
[46]
John K. Rempel, John G. Holmes, and Mark P. Zanna. 1985. Trust in close relationships. J. Person. Soc. Psychol. 49, 1 (1985), 95.
[47]
Zahra Rezaei Khavas, Russell Perkins, S. Reza Ahmadzadeh, and Paul Robinette. 2021. Moral-trust violation vs performance-trust violation by a robot: Which hurts more? arXiv e-prints (2021), arXiv–2110.
[48]
Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner. 2016. Overtrust of robots in emergency evacuation scenarios. In 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI’16). IEEE, 101–108.
[49]
Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 2015. Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI’15). IEEE, 1–8.
[50]
Kristin E. Schaefer. 2016. Measuring trust in human robot interactions: Development of the “trust perception scale-HRI.” In Robust Intelligence and Trust in Autonomous Systems. Springer, 191–218.
[51]
Kristin E. Schaefer, Jessie Y. C. Chen, James L. Szalma, and Peter A. Hancock. 2016. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Hum. Fact. 58, 3 (2016), 377–400.
[52]
Sarah Strohkorb Sebo, Priyanka Krishnamurthi, and Brian Scassellati. 2019. “I Don’t Believe You”: Investigating the effects of robot trust violation and repair. In 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI’19). IEEE, 57–65.
[53]
Anthony Selkowitz, Shan Lakhmani, Jessie Y. C. Chen, and Michael Boyce. 2015. The effects of agent transparency on human interaction with an autonomous robotic agent. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 59. SAGE Publications, Los Angeles, CA, 806–810.
[54]
T. B. Sheridan. 1989. Trustworthiness of command and control systems. In Analysis, Design and Evaluation of Man–Machine Systems 1988. Elsevier, 427–431.
[55]
John J. Skowronski and Donal E. Carlston. 1987. Social judgment and social memory: The role of cue diagnosticity in negativity, positivity, and extremity biases. J. Person. Soc. Psychol. 52, 4 (1987), 689.
[56]
John J. Skowronski and Donal E. Carlston. 1989. Negativity and extremity biases in impression formation: A review of explanations. Psychol. Bull. 105, 1 (1989), 131.
[57]
Nichole D. Starr, Bertram Malle, and Tom Williams. 2021. I need your advice... Human perceptions of robot moral advising behaviors. arXiv preprint arXiv:2104.06963 (2021).
[58]
Sarah Strohkorb Sebo, Margaret Traeger, Malte Jung, and Brian Scassellati. 2018. The ripple effects of vulnerability: The effects of a robot’s vulnerable behavior on trust in human-robot teams. In ACM/IEEE International Conference on Human-Robot Interaction (HRI’18). 178–186.
[59]
Jukka Sundvall, Marianna Drosinou, Ivar Rodríguez Hannikainen, Kaisa Elovaara, Juho Halonen, Volo Herzon, Robin Kopecky, Michaela Jirout Košová, Mika Koverola, Anton Kunnari, Silva Perander, Teemu Saikkonen, Jussi Palomäki, and Michael Laakasuo. 2021. Innocence over utilitarianism-Heightened Moral Standards for Robots in Rescue Dilemmas PREPRINT. (2021).
[60]
Peter E. Swift and Alvin Hwang. 2013. The impact of affective and cognitive trust on knowledge sharing and organizational learning. Learn. Organiz. 20, 1 (2013), 20–37.
[61]
Daniel Ullman and Bertram F. Malle. 2019. Measuring gains and losses in human-robot trust: Evidence for differentiable components of trust. In 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI’19). IEEE, 618–619.
