Article

Group & Organization Management
2023, Vol. 48(6) 1666–1744
© The Author(s) 2022
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/10596011221076636
journals.sagepub.com/home/gom

How and When Can Robots Be Team Members? Three Decades of Research on Human–Robot Teams

Franziska Doris Wolf1 and Ruth Maria Stock-Homburg1

Abstract
Artificial intelligence and robotic technologies have grown in sophistication and reach. Accordingly, research into mixed human–robot teams that comprise both robots and humans has expanded as well, attracting the attention of researchers from different disciplines, such as organizational behavior, human–robot interaction, cognitive science, and robotics. With this systematic literature review, the authors seek to establish deeper insights into existing research and sharpen the definitions of relevant terms. With a close consideration of 150 studies published between 1990 and 2020 that investigate mixed human–robot teams, conceptually or empirically, this article provides both a systematic evaluation of extant research and propositions for further research.

Keywords
mixed human–robot team, technology, team dynamics/processes, intra-team
dynamics, inter-team dynamics, robotic teammate, robotic leader, robotic
team assistant, robotic roles, overview

1 Chair for Marketing and Human Resource Management, Technical University of Darmstadt, Darmstadt, Germany
Corresponding Author:
Franziska Doris Wolf, Chair for Marketing and Human Resource Management, Technical
University of Darmstadt, Hochschulstraße 1, 64289, Darmstadt, Germany.
Email: franziska.wolf@bwl.tu-darmstadt.de

In many current work settings, humans partner with robots to accomplish tasks
in various fields. Many of these robots can be classified as social robots, which
interact with humans in natural ways that feature speech, gestures, and facial
expressions (Breazeal, 2003). Unlike industrial robots, they work like unique,
contributing members of organizations and so-called human–robot teams
(HRTs) (Hoffman & Breazeal, 2004).
The presence and uses of such teams are growing, especially in the face of
the various restrictions imposed by the COVID-19 pandemic (Scassellati &
Vázquez, 2020). An estimated 82% of business leaders already believed in
2018 that HRTs would be a daily reality within 5 years (Dell Technologies,
2018); when we recently surveyed 596 U.S. employees1 (65% men, mean
age = 36.92 years, SD = 10.85 years), we learned that they could easily
imagine working with a robot as teammate (39%), team assistant (50%), or
even team leader (34%). For example, robots can track projects, perform real-
time scheduling, and support complex organizational decision-making
processes.
Even as these uses and imagined applications expand, though, research on HRTs remains limited by disciplinary siloes. That is, the concept is interdisciplinary, but we lack summary assessments of existing knowledge about or common definitions used in relation to HRT across each individual discipline. Nor do we have a sense of which factors or team member characteristics inform the ways of working and outcomes of such HRTs. With this review, we attempt to systematically synchronize extant definitions and detail prior research on HRTs according to its theoretical perspectives, empirical design, and major findings.
We focus on embodied robots, which we define as physical representations of AI in a physical world that recognize their environment and can interact with it (Bradshaw et al., 2009; Fong et al., 2003; High-Level Expert Group on Artificial Intelligence, 2019; Wolf & Stock-Homburg, 2021).2 For the review, we conducted online searches using Google Scholar and EBSCO but also reviewed journals and conference proceedings related to human–robot interactions. As detailed in Supplementary Appendix A, we searched 17 conferences and 40 journals. Most of them come from the fields of HRI, robotics, and computer science. We manually assessed each study type, embodiment form, robot level, focus topic, and team size and applied various related exclusion criteria. Ultimately, we reviewed 150 relevant studies, published between 1990 and May 2020 (for further details on the study selection, see Figure 1 and Supplementary Appendix A). This review attempts to provide answers to two questions:

Figure 1. Overview of reviewed, included, and excluded studies. Note: (1) Please
see Supplementary Appendix A for more details on the exclusion criteria. (2) In total,
we reviewed 150 studies in detail. Details on the 24 studies considering dyadic task
teams can be found in Supplementary Appendix B.

1. How are human-robot teams defined in prior literature?


2. Which intra-member characteristics, inter-member characteristics, and
contingency factors influence the input–process–output relationships
in HRTs?

Proposed Typologies, Definitions, and Review Framework
Robot Typology
A vast multitude of robot typologies has emerged from various efforts to categorize robots (for an overview, see Onnasch & Roesler, 2020). We propose a business-oriented robot typology (Figure 2) that rests on two main dimensions: social interaction intensity (Breazeal, 2003; Deng et al., 2019; Fong et al., 2003; Nass et al., 1994) and robot morphology (Onnasch & Roesler, 2020). In this context, we understand social interaction as the application of social models to the interaction with a robot.3 Across the two dimensions of our typology, we can identify four categories of robots that are particularly relevant to business contexts (a schematic sketch follows the list below):

Figure 2. Robot typology with selected examples from literature. Notes: Due to anthropomorphism, robots can be attributed more prominent human (social) characteristics than they originally were designed to include (see arrows). Picture sources: Sociable Trash Box, Pepper, Johnny, TIAGo, Robonaut: all from ABOT database (http://abotdatabase.info//); Care-o-bot: Fraunhofer IPA (https://www.care-o-bot.de/de/care-o-bot-3/download/images.html); Roomba: iRobot (https://shop.irobot.de/roomba-staubsstaubsaugerroboter-roomba-606/R606040.html); NIFTi ground vehicle: Kruijff et al., 2014; Elenoide: leap in time GmbH Darmstadt.

• Machine-like robot with low social interaction: Robots like the Roomba vacuum (Forlizzi & DiSalvo, 2006) or the NIFTi ground vehicle (Kruijff, Kruijff-Korbayová, et al., 2014) are designed primarily with functionality in mind.
• Human-like robot with low social interaction: Robots like Johnny 05 (SIM TU Darmstadt, 2021), TIAGo (Pages et al., 2016), and Robonaut (Bluethmann et al., 2003) are humanoid robots with legs, arms, and heads. Despite this physical appearance, these robots are designed primarily to fulfill intended (work) tasks, not to engage in social interaction.
• Machine-like robot with high social interaction: Robots in this category, like the Sociable Trash Box (Yamaji et al., 2011) and the Care-O-Bot (Kittmann et al., 2015), lack a human physical appearance but can elicit social responses (Schmitt et al., 2017).
• Human-like robot with high social interaction: Robots like Elenoide (Stock et al., 2019) or Pepper (Pandey & Gelin, 2018) look very similar to actual humans and have strong social skills, including emotion recognition.
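
To make the two-dimensional classification concrete, it can be rendered as a minimal sketch in code. The four category labels follow Figure 2; the RobotProfile structure, the numeric social-interaction scale, and the 0.5 threshold are illustrative assumptions rather than part of the proposed typology.

```python
from dataclasses import dataclass

@dataclass
class RobotProfile:
    """Describes a robot along the two typology dimensions from Figure 2."""
    name: str
    morphology: str            # "machine-like" or "human-like"
    social_interaction: float  # illustrative 0.0-1.0 proxy for social interaction intensity

def typology_category(robot: RobotProfile, threshold: float = 0.5) -> str:
    """Map a robot onto one of the four categories of the typology."""
    level = "high" if robot.social_interaction >= threshold else "low"
    return f"{robot.morphology} robot with {level} social interaction"

# Example placements, roughly following the examples in Figure 2.
print(typology_category(RobotProfile("Roomba", "machine-like", 0.1)))
print(typology_category(RobotProfile("Robonaut", "human-like", 0.2)))
print(typology_category(RobotProfile("Care-O-Bot", "machine-like", 0.7)))
print(typology_category(RobotProfile("Pepper", "human-like", 0.9)))
```

Anthropomorphism can shift a robot toward a more human-like perception than its design intends (the arrows in Figure 2), so any such coding should be treated as a starting point rather than a fixed label.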

Team Typology
All-Human Teams. Human teams are commonly defined as collectives of three or more people (Stock, 2003), “who (a) exist to perform organizationally relevant tasks, (b) share one or more common goals, (c) interact socially, (d) exhibit task interdependencies . . . , (e) maintain and manage boundaries, and (f) are embedded in an organizational context” (Kozlowski & Bell, 2003, p. 334). They are dynamic at three main levels (de Wit & Greer, 2008; DeChurch et al., 2013): “tasks (i.e., goals, ideas, and performance strategies), . . . relationships (i.e., personality clashes, interpersonal styles)” (DeChurch et al., 2013, p. 560), and the processes used to manage or achieve teamwork (de Wit & Greer, 2008).

Human–Robot Teams. Although HRTs have been investigated widely—e.g., as “robot[s] as team member[s]” (HRI’07, 2007) and “robots in groups and teams” (Jung et al., 2017)—no universal definition exists that reflects and is accepted by the broad range of disciplines that feature research in related topics (Figure 3). Therefore, our first research question seeks to clarify what researchers mean when they refer to this concept, so that different studies do not mean different things under the same name (see, e.g., Kelley, 1927; Marsh et al., 2019).
In particular, ongoing discussion centers on whether the minimum required HRT size should be two or three members. Dyads with just two members represent very specific constellations, and they lack the dynamics that are of core interest for HRT research (Abrams & der Pütten, 2020). Yet we found many conceptual and empirical studies that claim to investigate HRTs by studying dyadic teams. Another dimension in which researchers differ pertains to whether they focus on pure task interactions or on both task and social interactions within the team. Among the 93 reviewed studies that focus on HRTs with multiple members, we derive several elements pertaining to the composition of HRTs; Table 1 classifies extant research on HRTs according to the team type and composition it considers, along with the definitions it offers. As it shows, many researchers investigate human-directed robot teams (especially for USAR tasks) or autonomous mixed teams with no clearly assigned leadership. Relatively few empirical studies address human- or robot-directed mixed HRTs, and we find no studies of robot-directed human teams.
Wolf and Stock-Homburg 1671

Figure 3. Overview of HRT definitions. Note: Definitions of HRTs include a narrow perspective of HRTs as multiple-member collaborative teams (1), and a broader perspective of HRTs as multiple-member task teams (2), dyadic collaborative teams (3), or dyadic task teams (4). The team types (1)–(3) are discussed in detail in this manuscript. Details on the dyadic task teams (4) can be found in Supplementary Appendix B.

From these dimensions, four constellations of HRTs can be derived (Figure 3): (1) HRTs as multiple-member collaborative teams. That is, a mixed human–robot team (HRT) consists of at least three members (humans and robots) who perform joint tasks interdependently and interact socially to achieve common goals. (2) Multiple-member task teams have more than two members that focus primarily on task interaction. These teams can be found in space and USAR contexts, where robots work in teams with humans to increase efficiency and safety (Bluethmann et al., 2003). (3) Dyadic collaborative teams are human–robot dyads that interact interdependently, both socially and on a task level, to achieve their common goals (Breazeal, Hoffman, & Lockerd, 2004). (4) Dyadic task teams only engage in task interaction to reach their goals, e.g., in manufacturing a car (Liu & Tomizuka, 2014).
Two perspectives can be differentiated regarding HRTs: a broader and a narrow perspective. From a broader perspective, HRTs fall into the three categories of multiple-member task teams, dyadic collaborative teams, and dyadic task teams. These broader perspectives, however, go beyond our narrow understanding of HRTs and rather aim at capturing the various perspectives in extant robotic research (in this review we do not consider the dyadic task teams in depth; see instead Supplementary Appendix B). From a narrow perspective, combining the insights gleaned from all-human team definitions (e.g., Stock, 2004) and robotic research, we define HRTs as multiple-member collaborative teams.4

Table 1. Overview of Different Team Compositions, Sample Definitions, and Related Research.

Human-directed robot team
  Sample definition: “a single human operator can oversee and flexibly intervene in the operation of a team of largely autonomous robots” (Sellner et al., 2006, p. 1425).
  Related empirical research: Management: Pina et al., 2008; Sellner et al., 2006. Cognitive science: J. Wang et al., 2008; You & Robert, 2016, 2017, 2019a, 2019b. HRI: Alboul et al., 2008; Crandall et al., 2003; Goodrich et al., 2007. Military: Brown et al., 2005. Robotics: Zheng et al., 2013. (Urban) search and rescue: Burke & Murphy, 2004, 2007; Kantor et al., 2006; Lee et al., 2010; Ranzato & Vertesi, 2017; H. Wang et al., 2010; Yazdani et al., 2016.

Human-/Robot-directed mixed team
  Sample definition: “human workers . . . perform physical tasks in coordination with robotic partners” and “human and robot co-leaders [have] identical functions and capabilities, by restricting the human co-leaders’ capabilities such that they were the same as those of the robot” (Gombolay, Gutierrez, et al., 2015, pp. 295–296).
  Related empirical research: HRI: Law et al., 2020. Management: Gombolay, Gutierrez, et al., 2015, referring to human and robotic co-leads and human assistants; Gombolay, Huang, & Shah, 2015, referring to human leader, robotic and human assistants.

Robot-directed human team
  Sample definition: “the partner [robot] . . . is instructing the primary human . . . on the task steps to complete. There are no shared decision making tasks” (Harriott et al., 2011, p. 46).a
  Related empirical research: N/A; the only studies with such a team composition refer to robot-directed dyadic task teams.

Autonomous mixed team
  Sample definition: “humans and robots [work] together to accomplish complex team tasks” (Dias et al., 2008, p. 1).
  Related empirical research: Cognitive science: Correia, Mascarenhas, et al., 2019; Jung et al., 2015; Strohkorb Sebo et al., 2020; Traeger et al., 2020. HRI: Gervits et al., 2020; Kwon et al., 2019; Tang & Parker, 2006. Robotics: Claure et al., 2020; Iqbal & Riek, 2017; Marge et al., 2009. Space: Fong et al., 2005; Fong et al., 2006. (Urban) search and rescue: Dias et al., 2008; Jung et al., 2013.

Note: The studies (with team sizes of at least n = 3) are categorized according to a best fit approach, so they might feature aspects of more than one research discipline. The overview of related empirical research is not exhaustive.
a Harriott et al. (2011) only consider a dyadic task team.

Proposed Framework
In this overview, we rely on an input–process–output (IPO) model (Gladstein, 1984; You & Robert, 2018b; see Figure 4). Categories 1 and 2 focus on two important input factors: intra-member team characteristics, such as the (physical) robot design, robot behavior, or human preferences and behavior (Category 1), and inter-member team characteristics, including team composition, autonomy, and leadership (Category 2). Category 3 includes studies of team processes (Barrick et al., 1998; Gladstein, 1984) like (physical) coordination, communication, collaboration, and trust. The factors studied in these categories affect team outputs (Barrick et al., 1998), defined as “psychological and business-related outcomes produced by teams” (Stock, 2004, p. 277). Studies in Category 4 investigate moderating effects on input, process, and output. Finally, some studies depict causal chains (Stock, 2004) from the inputs through mediators to outputs (Category 5). The coding scheme used to classify studies is explained in detail in Supplementary Appendix A.

Figure 4. Proposed framework (adapted from Stock, 2004).
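
As a rough illustration of how the review's five categories map onto the IPO logic, the coding can be sketched as follows. The category labels mirror Figure 4 and the surrounding text; the data structure and the two example entries are simplified, hypothetical stand-ins for the actual coding scheme documented in Supplementary Appendix A.

```python
from enum import Enum

class Category(Enum):
    INTRA_MEMBER_CHARACTERISTICS = 1  # inputs: robot design, robot behavior, human preferences
    INTER_MEMBER_CHARACTERISTICS = 2  # inputs: team composition, autonomy, leadership
    TEAM_PROCESSES = 3                # coordination, communication, collaboration, trust
    MODERATING_EFFECTS = 4            # contingency factors on input, process, and output
    CAUSAL_CHAINS = 5                 # inputs -> mediators -> outputs

# Hypothetical, simplified coding of two reviewed studies.
coded_studies = [
    {"study": "Claure et al. (2020)", "focus": "robot fairness (robot behavior)",
     "category": Category.INTRA_MEMBER_CHARACTERISTICS},
    {"study": "Gombolay, Gutierrez, et al. (2015)", "focus": "robot decision-making authority",
     "category": Category.INTER_MEMBER_CHARACTERISTICS},
]

for entry in coded_studies:
    print(f"{entry['study']}: Category {entry['category'].value} ({entry['category'].name})")
```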



Conceptual and Empirical Findings Related to Human–Robot Teams
Category 1: Effects of Intra-Member Team Characteristics
Focus Areas and Major Findings. Tables 2, 3, and 4 summarize the studies in this category in terms of the robot characteristics studied, the definition of HRT, samples, and key findings. Research on (physical) robot design for both dyadic HRTs and multiple-member HRTs tends to center on robotic hardware (e.g., degrees of freedom of components; Bluethmann et al., 2003), physical appearances in terms of anthropomorphism or robot size (Bartneck et al., 2006), or uses of gestures and facial expressions (Minato et al., 2004). The physical design of the robot is the explicit focus of conceptual and empirical studies on Robonaut, which was designed by NASA to be deployed in HRTs devoted to space exploration (e.g., Bluethmann et al., 2003; Fong et al., 2005). These studies address the physical design features needed to execute its intended tasks (e.g., hands) and requirements for controlling Robonaut in harsh conditions (Bluethmann et al., 2003).

Table 2. Conceptual Studies on Multiple-Member HRTs Related to Intra-Member Team Characteristics and Their Effects.

Entries list: author/subcategory/disciplinea/team interactionb; key findingsc,d.

Ambrose et al. (2000)/(physical) robot design/VI/T — Overview of the design of NASA’s Robonaut
Bluethmann et al. (2003)/(physical) robot design/VI/T — Information on the design of NASA’s Robonaut
Fong et al. (2005)/(physical) robot design/VI/T — Proposal of interaction framework “Human–Robot Interaction Operating System” (HRI/OS); proposal of metrics for evaluation of HRTs
Kelly and Watts (2017)/robot behavior/V/T+S — Position paper that suggests that task-related “inefficiency” in the form of social behavior should be considered when designing social robots

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline.
bTeam interaction: T = task interaction, T+S = task and social interaction.
cNone of the studies specified underlying theories. In part of the studies, the robot morphology, robot level, and type of embodiment are not specified. The two studies that provide information (Bluethmann et al., 2003; Fong et al., 2005) use the physical, humanoid “Robonaut” robot on a lower or same level as humans.
dIn most of the studies, the team setup is not specified. Only Kelly and Watts (2017) specify that they focus on a human-directed robot team.

Table 3. Empirical Studies on Multiple-Member HRTs Related to Intra-Member Team Characteristics and Their Effects.

Entries list: author/subcategory/disciplinea; robot (morphologyb/robot levelc/type of embodiment); team interactiond; data (data basis/participantsf/time frameg); underlying theories; independent variables (IVs) and dependent variables (DVs)h; key findings.

Claure et al. (2020)/robot behavior/V
  Robot: n.i./↔/n.i.; team interaction: T+S
  Data: n = 282; 156 f, 124 m, 1 other, 1 not disclosed; age M = 36 years (SD = 11); mTurk; from the US/C
  Theories: equity models, fairness theory
  IVs: robot fairness
  DVs: user trust (+/n.s.); perceived robot fairness (n.s.)
  Key findings: “Fairness of resource allocation has significant effect on user’s trust in the system” (p. 299)

Correia, Mascarenhas, et al. (2019)/robot behavior/IV
  Robot: humanoid (EMYS robotic head)/↔/physical robot; team interaction: T+S
  Data: n = 70; 32 f, 37 m, 1 unknown; age range 22–62 years (M = 34.6, SD = 11.557)/C
  Theories: n.i.
  IVs: prosocial robot behavior
  DVs: perceived robot social attributes (+)
  Key findings: Prosocial robots are rated more positively in terms of their social attributes (p. 143); “The perception of competence, the responsibility attribution (blame/credit) and the preference for a future partner are only significantly different in the losing condition” (p. 143)

Correia, Petisca, et al. (2019)/human preferences and behavior/IV
  Robot: humanoid (“Emys”, “Glin”; both identical physical appearance: EMYS)/↔/physical robot; team interaction: T+S
  Data: n1 = 30, n2 = 61; 1: 17 m, age range 19–42 years, M = 23.03 (SD = 4.21), university students; 2: 38 m, age range 17–32 years, M = 23.66 (SD = 3.24), 59 university students, 2 workers/C/L (3 sessions in direct succession)
  Theories: learning goal theory
  IVs: robot goal orientation (performance-driven vs. learning-driven)
  DVs: Competitiveness Index (higher for performance-driven); McGill Friendship Questionnaire (higher for learning-driven); Relationship Assessment Scale (higher for learning-driven); Godspeed Questionnaire (n.s./higher for learning-driven)
  Key findings: “When a partner is chosen without previous partnering experience, people tend to prefer robots with relationship-driven characteristics as their partners compared with competitive robots” (p. 1); “After some partnering experience has been gained, the choice becomes less clear and additional driving factors emerge: (2a) participants with higher levels of competitiveness (personal characteristics) tend to prefer Emys [the performance-driven robot], whereas those with lower levels prefer Glin [the learning-driven robot], and (2b) the choice of which robot to partner with also depends on team performance, with the winning team being the preferred choice.” (p. 1)

Fraune et al. (2020)/robot behavior/I
  Robot: functional (Sociable Trash Box [STB])/n.i./image/video of a robot, physical robot; team interaction: T+S
  Data: n1 = 630, n2 = 71; 1: from USA (n = 333, 47% f, age M = 24.59, SD = 9.59) and Japan (n = 297, 7% f, age M = 21.55, SD = 3.35), recruited in universities; 2: from USA (42% f, age M = 19.20, SD = 1.30), recruited from university/C
  Theories: social identity theory
  IVs: robot behavior toward robots (none, social, functional); robot behavior toward humans (social, functional); country (US, Japan); robot behavior toward robot (social, functional); robot behavior toward human (social, functional)
  DVs: anthropomorphism of robot (partially + for robot–robot social, n.s. for other conditions); emotional and behavioral intention about robot (n.s.); entativity of robot (n.s.); cooperation (n.s.); anthropomorphism of robot (partially + or − for robot–robot social); emotional and behavioral intention about robot (+ for robot–human social); entativity of robot (partially + for robot–human functional)
  Key findings: Social robot–robot behavior increases anthropomorphism, social robot–human behavior increases positive emotions and willingness for interactions (p. 1); robots that are designed for positive human interaction resp. to be perceived intelligent should behave socially towards humans resp. also towards robots (p. 1)

Gombolay et al. (2017)/robot behavior, human preferences and behavior/I
  Robot: functional (Willow Garage PR2 platform)/↓/physical robot; team interaction: T+S
  Data: n1 = 17, n2 = 18, n3 = 20; all recruited from local university; 1: 6 m, age range 18–25 years, M = 19.5 (SD = 1.95); 2: 10 m, age range 19–45 years, M = 27 (SD = 7); 3: 10 m, age range 18–30 years, M = 21 (SD = 3)/C
  Theories: situational awareness
  IVs → DVs: degree of robotic autonomy in scheduling decisions → situation awareness (−); degree to which participant’s preferences are respected by robotic teammate → preference to work with robot (+); degree to which participant’s preferences are respected by robotic teammate, participant utilization → preference to work with robot (+, +)
  Key findings: “human participants’ awareness of their team’s actions decreased as the degree of robot autonomy increased” (p. 614); “participants preferred working with a robot that included their preferences when scheduling and . . . preferred working with a robot that utilized them more frequently” (p. 613)

Gombolay, Huang, and Shah (2015)/robot behavior, human preferences and behavior/II
  Robot: functional (Willow Garage PR2 platform)/↓/physical robot; team interaction: T+S
  Data: n = 17/n.i./C
  Theories: n.i.
  IVs: consideration of human preferences
  DVs: willingness to work (+)
  Key findings: Humans prefer working with a robotic teammate that considers their preferences; team efficiency has to be kept in mind when allocating decision-making authority (robots taking decisions can lead to decreased efficiency and the belief that the robot is unaware of team goals)

Jiang and Wang (2019)/robot behavior/V
  Robot: n.i./n.i./n.i.; team interaction: T+S
  Data: n.i./n.i./n.i.
  Theories: regret theory
  IVs: robot decision making (regret-decision model)
  DVs: teaming performance (+)
  Key findings: More human-like decision-making by robots can help to balance workload and performance in HRTs

Law et al. (2020)/(physical) robot design, robot behavior/I
  Robot: humanoid (Willow Garage PR2)/↔/image/video of robots; team interaction: T+S
  Data: n1 = 198, n2 = 421; 1: 95 f, 1 other, age range 18–77 years (M = 34.96, SD = 11.47); 2: 162 f, 3 other, age range 18–81 years (M = 36.52, SD = 11.85); both: mTurk/C
  Theories: emotional intelligence, social role theory
  IVs → DVs: robot emotional intelligence (+), robot gender (+, male), vignette presentation (n.s.) → trust in robot; robot emotional intelligence (+), robot gender (+, male), vignette presentation (+, text), participant gender (n.s.), participant age (−) → trust in robot; robot trustworthiness (n.s.), robot gender (n.s.) → perceived robot EI; robot trustworthiness (+), robot gender (n.s.), vignette presentation (+, text), order of questionnaires (+, EI first, then trust) → trust in robot
  Key findings: Robotic EI influences trust in a robot (p. 1); “Gender stereotypical expectations related to EI [are] transferred to trust” (p. 1)

Lei and Rau (2020)/human preferences and behavior/IV
  Robot: humanoid (Nao)/↔/physical robot; team interaction: T+S
  Data: n = 60; 30 f; age M = 22.2 years (SD = 2.29); (under-)graduate students (40%/60%)/C
  Theories: CASA paradigm, common sense psychology, gender studies
  IVs: task outcome (n.s.); human gender (−/+)
  DVs: attribution of blame to robot; attribution of credit to robot
  Key findings: Gender effects play a role in the attribution of credit and blame to robot team members; “participants attributed more credit and less blame to the robot member than to themselves” (p. 1); “the robot member was more blamed than the human member, whereas they received similar levels of credit” (p. 1)

Strohkorb Sebo et al. (2018)/robot behavior/IV
  Robot: humanoid (Nao)/↔ (not specified)/physical robot; team interaction: T+S
  Data: n = 105 (in 35 teams); experimental condition: 26/54 m, age M = 20.13 years (SD = 7.13); control condition: 15/51 m, M = 21.333 years (SD = 11.00); recruited from university campus and surrounding town and summer program/C
  Theories: trust theories
  IVs: robot vulnerability
  DVs: team member interactions with robot (+); perceived psychological safety (n.s.); team member interactions with other human team members (+)
  Key findings: Robots making vulnerable statements lead to increased engagement with the robot; in groups with robots making vulnerable statements, human teammates take more actions to reduce tension experienced by the team (e.g., explain failures, laugh together)

Traeger et al. (2020)/robot behavior/IV
  Robot: humanoid (Nao)/↔/physical robot; team interaction: T+S
  Data: n = 153 (in 51 groups of 3 each); vulnerable condition: 28 f, 26 m, age M = 20.13 years (SD = 7.13); neutral condition: 36 f, 15 m, age M = 21.33 years (SD = 11.01); silent condition: 31 f, 17 m, age M = 23.94 years (SD = 7.36)/C
  Theories: n.i.
  IVs: robot vulnerability
  DVs: team member interactions with other human team members (+); total talking time (+); team perception (+); conversation equality (+)
  Key findings: “people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round” (p. 6370)

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline.
bn.i. = no information provided by author(s).
cRobot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
dTeam interaction: T = task interaction, T+S = task & social interaction.
fParticipants: f = female, m = male.
gTime frame: C = cross-sectional, L = longitudinal.
hEmpirical findings: (−) = negative effect, (+) = positive effect, (n.s.) = not significant.

The second dimension, robot behavior, appears in both conceptual and empirical research on dyadic and multiple-member HRTs. Conceptual studies tend to focus on “inefficient” robots that are designed not to increase the efficiency of an HRT but rather to exhibit social behavior, to facilitate their integration into both teams and wider society (Kelly & Watts, 2017). Empirical studies on multiple-member HRTs pertain to prosocial behaviors (Correia, Mascarenhas, et al., 2019), fair resource allocations (Claure et al., 2020), social robot–robot and robot–human behavior (Fraune et al., 2020), vulnerable robotic utterances (Traeger et al., 2020), and vulnerable robotic behavior (Strohkorb Sebo et al., 2018). All these features signal a robot’s seeming personality, in line with the Computers Are Social Actors (CASA) paradigm and its prediction that humans treat robots as social entities (Nass et al., 1994). Such behaviors exert positive effects on robot or team perceptions or processes. In particular, prosocial robotic behavior enhances users’ perceptions of and behaviors toward robots, prompts better social attribute ratings (Correia, Mascarenhas, et al., 2019), and leads humans to become more engaged (Strohkorb Sebo et al., 2018). Empirical studies on dyadic HRTs that study specific prosocial robotic behaviors such as explanations (Hiatt et al., 2011; Wang et al., 2016a, 2016b; Wang et al., 2018) or, inter alia, apologies in the case of errors (Natarajan & Gombolay, 2020) reveal positive effects of such behaviors on trust and team performance. Finally, robot touch appears to lead to better ratings of the social performance, skills, and fairness of a robot (Arnold & Scheutz, 2018). However, perceptions of robot touch need to be considered in the context of the interaction; for example, gender effects might arise and have important influences.
The human preferences and behaviors dimension has not been studied broadly in an HRT context, possibly because it gets addressed more commonly in relation to HRI or HRC. We find one recent study of attributions of blame and credit, using a multiple-member HRT (Lei & Rau, 2020), which reveals that human team members “attributed more credit and less blame to the robot member than to themselves” (p. 1). Another study deals with membership preferences in HRTs (Correia, Petisca, et al., 2019), demonstrating that people who exhibit greater competitiveness prefer a performance-driven robot over a learning-driven one.
Finally, four studies of multiple-member or dyadic HRTs investigate interplays among the three subcategories (Gombolay et al., 2017; Gombolay, Huang, & Shah, 2015; Law et al., 2020; Richert et al., 2016). Thus, different dimensions of intra-member team characteristics appear intertwined and important for understanding the role of robots in team contexts.

Table 4. Empirical Studies on Dyadic HRTs Related to Intra-Member Team Characteristics and Their Effects.

Entries list: author/subcategory/disciplinea; robot (morphologyb/robot levelc/type of embodiment); team interactiond; data (data basis/participantsf/time frameg); underlying theories; independent variables (IVs) and dependent variables (DVs)h; key findings.

Arnold and Scheutz (2018)/robot behavior/I
  Robot: humanoid/↔/image/video of a robot; team interaction: T+S
  Data: n = 332; 135 f; age 44.43 years; mTurk; US citizens/C
  Theories: n.i.
  IVs: positive robot attitude; robot touch (+/n.s.)
  DVs: perceived robot capability; confidence in robot skills (+); perceived robot qualification (+/n.s.); perceived robot fairness (+)
  Key findings: Robot touch leads to better rating of the social performance, skills, and fairness of a robot; however, gender effects from survey responses show that robot touch has to be considered with caution, as the context and expectations from society can lead to a significantly varying perception of robot touch

Bartneck et al. (2006)/(physical) robot design/I
  Robot: android (Tron-X, PKD), animal-like (AIBO)/↔/image/video of a robot; team interaction: T+S
  Data: n = 12; age range 21–54 years (M = 29.9); Master’s and Ph.D. students in Psychology or Engineering/within-subject design, C
  Theories: CASA paradigm, uncanny valley paradigm
  IVs: human-/animal-likeness of robot
  DVs: praise (+); punishment (−)
  Key findings: The study results lead to the conclusion that the CASA paradigm holds true for computers; robots, on the other hand, were treated differently depending on their physical appearance: very human-like or animal-like robots were praised more and punished less than computer and human, machine-like robots were treated like computer and human

Hiatt et al. (2011)/robot behavior/IV
  Robot: humanoid (Mobile, Dexterous, Social [MDS] robot)/↔/image/video of a robot; team interaction: T+S
  Data: n = 35/n.i./n.i.
  Theories: theory of mind
  IVs: robot explanation
  DVs: perceived robot naturalness (+); perceived robot intelligence (+)
  Key findings: A robot that uses a theory of mind (ToM) approach and offers explanations is perceived as both more intelligent and natural than a robot that either shows only simple correction or blindly follows a human (p. 2066); to utilize the ToM approach, the robot analyzes different models of human partners and, in case it finds a likely cause of unexpected behavior, articulates its findings (p. 2071)

Natarajan and Gombolay (2020)/robot behavior/I
  Robot: functional (Sawyer), humanoid (Kuri, Pepper, Nao)/n.i./physical robot, video/image of a robot (as condition of experiment); team interaction: T+S
  Data: n = 75; 51.47% f; age range 18–58 (M = 25.298, SD = 8.457); from university/C (4x4x2x2; between- and within-subject)
  Theories: n.i.
  IVs: perceived anthropomorphism (+); robot behavior (+); robot presence (n.s.); coalition building preface (n.s.)
  DVs: trust
  Key findings: “Behavior and anthropomorphism of the agent are the most significant factors in predicting the trust and compliance with the robot” (p. 33)

Richert et al. (2016)/(physical) robot design, robot behavior/II
  Robot: functional, humanoid/n.i./simulation/virtual robot; team interaction: T+S/n.i.
  Data: n.i./n.i./n.i.
  Theories: CASA paradigm, embodiment theories
  IVs: personal characteristics; robot characteristics
  DVs: task performance (not reported)
  Key findings: Proposal of experiments to gain insights into cooperation between humans and robots based on robot appearance and robot accuracy

N. Wang et al. (2016a)/robot behavior/III
  Robot: functional/n.i./simulation/virtual robot; team interaction: T+S
  Data: n = 220; mTurk, USA/C
  Theories: n.i.
  IVs: robot explanations
  DVs: transparency (+); trust (+); performance (+)
  Key findings: A better understanding of the decision-making processes of a robot can help improve trust; explanations based on POMDPs (Partially Observable Markov Decision Processes) can be a way to achieve this goal

N. Wang et al. (2016b)/robot behavior/III
  Robot: functional/n.i./simulation/virtual robot; team interaction: T+S
  Data: n = 220; mTurk, USA/C
  Theories: n.i.
  IVs: robot explanations (between-subject)
  DVs: transparency (+); trust (+); performance (+)
  Key findings: A better understanding of the decision-making processes of a robot can help improve trust in HRTs (similar experiment as in “The impact of POMDP-generated explanations on trust and performance in human-robot teams”)

N. Wang et al. (2018)/robot behavior/IV
  Robot: animal-like, functional/↓/simulation/virtual robot (online HRI test bed); team interaction: T+S
  Data: n = 61; 14 f; age range 18–23 years (M = 19.2); higher-education military school in the US, participants received extra course credit for participation/C (2 sessions, 120 mins total, 8 missions)
  Theories: n.i.
  IVs: embodiment (n.s.); communication strategy in case of error (n.s.); explanations (+)
  DVs: trust; transparency; transparency test score; compliance; no. of correct decisions made
  Key findings: Explanations by robots (even if they don’t indicate which components of a robot are faulty) have significant effects on transparency and self-reported trust of participants and result in better decision-making of a human teammate; robot embodiment and acknowledgement of mistakes have only a marginally significant or no impact on self-reported trust, transparency, or correct decisions

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline.
bn.i. = no information provided by author(s).
cRobot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
dTeam interaction: T = task interaction, T+S = task & social interaction.
fParticipants: f = female, m = male.
gTime frame: C = cross-sectional, L = longitudinal.
hEmpirical findings: (−) = negative effect, (+) = positive effect, (n.s.) = not significant.

Disciplines, Study Characteristics, and Underlying Theories. We assign 23 studies to this category (15 of multiple-member HRTs, 8 of dyadic HRTs); most of them rely on cognitive science (7 studies), space (3), or robotics (3) foundations. Very little research in HRI or management includes teams with at least three team members; 4 studies from HRI and 1 that reflects a management-related perspective take dyadic perspectives though. Only one study shows a longitudinal approach with three sessions in direct succession (Correia, Petisca, et al., 2019). All 19 empirical studies are conducted online or in laboratory settings. They feature humanoid and functional robots. Several studies do not disclose the type of robot used. Studies of multiple-member HRTs mostly focus on collaborative teams. Teams include autonomous mixed teams (7 studies), a human-directed robot team (1), and human/robot-directed mixed teams (3). Studies of dyadic collaborative HRTs also consider autonomous human-robot pairings (4), human-directed robots (3), or do not disclose the team setup. About one-third of the studies indicate their theoretical foundation. They draw on theories such as fairness theory and equity models (Adams, 1963, 1965), the theory of mind (Hiatt & Trafton, 2010), emotional intelligence (Salovey & Mayer, 1990), social role theory (Hentschel et al., 2019), and situational awareness (Endsley, 1995). Further, some researchers base their work on the CASA paradigm (Nass et al., 1994).

Limitations. Most of the studies rely on student samples, with cross-sectional laboratory designs, which limits the generalizability of their findings (Levitt & List, 2005). No studies in this category feature real-world settings. They also ignore dynamic developments in teams over time (Bell & Marentette, 2011) and instead take static perspectives, suggesting the need for longitudinal studies. Furthermore, none of these studies investigates robot-directed human/mixed teams (see Table 1). Finally, some studies establish a sound theoretical basis for their research, but it is important to extend this effort and perhaps apply other behavioral theories, such as social identity theory (Tajfel, 1974).

Category 2: Effects of Inter-Member Team Characteristics


Focus Areas and Major Findings. Tables 5, 6, 7 and 8 provide an overview of the studies in this category. In conceptual studies of the roles of humans and robots in HRTs, we find discussions of the suitability of teams, which suggest that robots should not replace humans but rather be treated as complements with individual strengths (Groom & Nass, 2007). Conceptual research on dyadic HRTs discusses parallels between all-human teams and HRTs and the importance of shared mental models (Demir et al., 2020). Empirical research on multiple-member HRTs checks for the ideal ratio between humans and robots (Burke & Murphy, 2004), how to include humans in HRTs (Strohkorb Sebo et al., 2020), consequences of the presence of robots on human decision-making (Fuse & Tokumaru, 2020), and the optimal organizational structure (Ranzato & Vertesi, 2017). Findings from these studies indicate that, inter alia, loosely coupled teams are most successful (Ranzato & Vertesi, 2017) and specialized interaction roles might impede the inclusion of human team members in HRTs (Strohkorb Sebo et al., 2020).

Table 5. Conceptual Studies on Multiple-Member HRTs Related to Inter-Member Team Characteristics and Their Effects.

Entries list: author/subcategory/disciplinea/team interactionb; underlying theories; key findingsc,d.

Abrams & der Pütten (2020)/team perceptions/IV/T+S
  Theories: in-group identification (e.g., social identity theory); cohesion theories (e.g., group development theory); entativity theory (e.g., formation of perceived entativity)
  Key findings: The in-group identification, cohesion, and entitativity (I-C-E) framework can be used as a theoretical basis for research on human–robot groups; multi-agent groups are similar but not the same as all-human groups; dyads have unique processes that differ from group and team processes

Bradshaw et al. (2012)/autonomy and control/III/T
  Theories: n.i.
  Key findings: Autonomy and coordination in human-agent-robot teamwork should be in the focus of future research to solve current problems

Dudenhoeffer et al. (2001)/autonomy and control/III/T+S
  Theories: shared mental models, situational awareness
  Key findings: Simulations are widely used in HRT and HRI research and can help to gain insights into this field, esp. when many robots are involved

Gladden (2014)/leadership/I/T+S
  Theories: French and Raven’s bases of power (with charismatic authority being a manifestation of referent power)
  Key findings: Charismatic robotic leaders will probably emerge naturally; introduction of three possible ways of charismatic robotic leaders

Groom and Nass (2007)/roles of humans and robots/III/T+S
  Theories: shared mental models
  Key findings: Robots should be evaluated as complements to human team members (rather than duplicates) to take advantage of individual abilities of humans and robots

Manikonda et al. (2007)/autonomy and control/V/T
  Theories: n.i.
  Key findings: Proposal of framework for communication and collaboration in HRTs (strong technical focus)

Musić and Hirche (2018)/autonomy and control/V/T
  Theories: n.i.
  Key findings: Proposal of control approach for robot teams

Nikolaidis and Shah (2012)/team perceptions/IV/T
  Theories: shared mental models
  Key findings: Proposal to use shared mental models also for HRTs

Phillips et al. (2012)/team perceptions/III/T+S
  Theories: shared mental models
  Key findings: “relevant human–animal team capabilities (. . .) can inform and guide the design of next-generation human–robot teams” (p. 1553)

Phillips et al. (2016)/team perceptions/I/T+S
  Theories: shared mental models, interdependence theory
  Key findings: Human-animal teams can be used as analogous examples for the development/set-up of effective HRTs

Samani and Cheok (2011)/leadership/II/T+S
  Theories: n.i.
  Key findings: Ideas on emotion-laden robotic leadership, advantages of robotic leaders, modes of robotic leadership

Scheutz et al. (2017)/team perceptions/IV/T+S
  Theories: shared mental models
  Key findings: Proposal of formal and computational framework for development and usage of shared mental models in HRTs based on all-human teams

Sierhuis et al. (2003)/autonomy and control/VI/T
  Theories: n.i.
  Key findings: Discussion of perspective on teamwork and sliding autonomy

Talamadupula et al. (2014)/team perceptions/V/T
  Theories: shared mental models
  Key findings: Proposal of “automated planning problem instance” (p. 2957)

Yazdani et al. (2016)/autonomy and control/VII/T
  Theories: n.i.
  Key findings: Proposal of cognition-enabled robot-control framework to foster a more natural communication between humans and robots

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline.
bTeam interaction: T = task interaction, T+S = task & social interaction.
cIn most of the studies, the robot morphology, robot level, and type of embodiment are not specified. The six studies that provide information (Abrams & der Pütten, 2020; Dudenhoeffer et al., 2001; Gladden, 2014; Manikonda et al., 2007; Talamadupula et al., 2014; Yazdani et al., 2016) focus on functional robots (e.g., Growbot [Dudenhoeffer et al., 2001], Pioneer P3-AT [Talamadupula et al., 2014]) and indicate different robot levels (lower/same and higher level) and embodiments (physical robot; simulation).
dIn most of the studies, the team setup is not specified. The two studies that provide information focus on human-directed robot teams (Musić & Hirche, 2018; Yazdani et al., 2016).

Investigations of autonomy and control in both dyadic and multiple-member HRTs either take a general view on the effects on teamwork (Bradshaw et al., 2012) or a more specific focus on adjustable autonomy (Sierhuis et al., 2003), in both conceptual and empirical efforts (e.g., Dias et al., 2008; Gombolay, Gutierrez, et al., 2015; Goodrich et al., 2007; Sellner et al., 2006). Findings indicate that somewhat autonomous robots and shared control can facilitate the work of human team members and make HRTs more efficient (e.g., Lee et al., 2010; Lewis et al., 2010; Sellner et al., 2006). Researchers also have proposed an algorithm to predict team performance, based on the robot’s performance in interaction with human team members or when it is autonomous (Crandall et al., 2003), as well as various control approaches for human-directed robot teams (Musić et al., 2019; Musić & Hirche, 2018), a control framework for USAR (Yazdani et al., 2016), an “HRT planning-execution framework” (Manikonda et al., 2007, p. 92), and a simulation framework that aims to support the development of command and control architectures (Dudenhoeffer et al., 2001).

Table 6. Conceptual Studies on Dyadic HRTs Related to Inter-Member Team Characteristics and Their Effects.

Entries list: author/subcategory/disciplinea/team interactionb; underlying theories; key findingsc.

Demir et al. (2020)/roles of humans and robots/VII/n.i.
  Theories: shared mental models
  Key findings: “results indicate that effective team interaction and shared cognition play an important role in human-robot dyadic teaming performance.” (p. 1)

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline.
bTeam interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s).
cRobot morphology, robot level, and type of embodiment as well as team setup are not specified.

Conceptual studies of leadership in HRTs cite potential stereotypes of
robotic leaders (Gladden, 2014) or look into emotions evoked by, benefits
of, and possible modes of robotic leadership (Samani & Cheok, 2011).
These studies present robotic leadership as a future phenomenon and, in
some cases, argue that it will emerge naturally (Gladden, 2014). Empirical
studies also introduce a scalable, generalizable mathematical framework to
model leader and follower behaviors in multiple-member HRTs and show
that this framework enables robots to influence human teams (Kwon et al.,
2019).
Finally, team perceptions might take the form of shared mental models, which have been predicted (e.g., Nikolaidis & Shah, 2012) and studied to determine their influence on team performance (Gervits et al., 2020), which appears positive. In addition, a conceptual framework of in-group identification, cohesion, and entitativity relies on parallels with dynamics in all-human teams (Abrams & der Pütten, 2020). Several studies investigate robots as in-group members of multiple-member or dyadic HRTs and identify positive effects on robot acceptance and anthropomorphization (e.g., Eyssel & Kuchenbrandt, 2012; Fraune et al., 2017). Another related topic pertains to the parallels between HRTs and human-animal teams, such as USAR teams that include rescue dogs (e.g., Phillips et al., 2016). Arguably, human-animal teams might provide models for developing HRTs.
1692

Table 7. Empirical Studies on Multiple-Member HRTs Related to Inter-member Team Characteristics and Their Effects.
Author/ Robot Morphologyb/ Team Data Basis/
Subcategory/ Robot Levelc/Type of Interactiond/Team Participantsf/Time Underlying Independent
Disciplinea Embodiment Setupe Frameg Theories Variable(s)h Dependent Variable(s)h Key Findings

Burke and Murphy Functional/↓ (Inuktun T n = 33 (two field Shared mental • Operator situational • Task performance • "a minimum 2:1 human-to-robot
(2004)/roles of Micro Variable studies)/ models, awareness (+) ratio is required for effective
humans and Geometry Tracked experienced situational • Goal-oriented team robot-assisted technical search
robots/VII Vehicle [n = 32], firefighters seeking awareness communication (+) in USAR” (p. 307)
Inuktun Microtracs USAR certification/ • Goal-oriented team
robot [n = 1])/ C communication and a shared
Physical robot mental model of the search space
and the task lead to better task
performance
Crandall et al. n.i./↓/n.i. T n_1 = 13, n_2 = 23/ n.i. • n.i. • n.i. • Proposal of performance
(2003)/ n.i./C (six 5-minute prediction algorithm for HRTs
autonomy and sessions each)
control/I
Dias et al. (2008)/ Functional (Pioneer, T n.i./n.i./C (15 minutes Sliding autonomy • Sliding autonomy • Performance (+) • Challenges of enabling sliding
autonomy and Segway ER1)/↓, run) methodology autonomy in HRTs can be
control/VII ↔/Physical robot overcome by the presence of six
key capabilities (requesting help,
maintaining coordination,
situational awareness, granularity,
prioritization, learning)
Fraune et al. Functional (Mugbot)/ T+S n = 48/21 f/C Group theory, • Group (ingroup, • Liking (higher for in-group • "participants favored the ingroup
(2017)/team ↔/Physical robot social identity outgroup) and humans in most over the outgroup, and humans
perceptions/IV competing teams • Agent (human, cases) over robots. Group had a greater
robot) • Anthropomorphism effect than Agent, so participants
(higher for ingroup in all preferred ingroup robots to
cases) outgroup humans.” (p. 1432)
Fuse and Tokumaru Humanoid T+S n = 14/Japanese n.i. • Presence of robot • Change in answers given (+ • "robots attempt to comply with
(2020)/roles of (RoBoHoN)/n.i./ university students/ considering group for change between round a group norm affects human’s
humans and Physical robot C (5 rounds) norms (vs. no robot) 1 and 2, for others n.s.) decision-making” (p. 56081)
robots/IV

(continued)
Group & Organization Management 48(6)
Table 7. (continued)
Author/ Robot Morphologyb/ Team Data Basis/
Subcategory/ Robot Levelc/Type of Interactiond/Team Participantsf/Time Underlying Independent
Disciplinea Embodiment Setupe Frameg Theories Variable(s)h Dependent Variable(s)h Key Findings

Gervits et al. Humanoid (PR2 by T n = 26 (from 36 Shared mental • Robot shared mental • Performance (+) • Shared mental models help to
(2020)/team Willow Garage)/ originally recruited)/ models models improve performance and
perceptions/I ↔/Simulation/virtual 19 m; age: M = 24.9 efficiency of HRTs
robot years (SD = 8.6);
from University
campus/C
Wolf and Stock-Homburg

Gombolay, Functional/ T+S n = 24/14 m, 10 f; age: n.i. • Presence of robot • Team efficiency (+/+) • “an autonomous robot can
Gutierrez, et al. ↔,↑/Physical robot range 20–42 (mean • Robot decision- • Perceived likeability, outperform a human worker in the
(2015)/ age of 27±7 years); making authority appreciation, and allocation of part of or all tasks that
autonomy and recruited via email understanding of co- have to be completed” (p. 293)
control/II and around leader ( /+) • People prefer to give control
a university campus/ authority to the robot
C • "People value human teammates
more than robotic teammates,
however, providing robots
authority over team coordination
more strongly improves their
perceived value compared to
giving similar authority to
a human team mate” (p. 293)
• People tend to “assign
a disproportionate amount of
work to themselves when working
with a robot (. . .) rather than
human team mates only” (p.293)
Goodrich et al. n.i./↓/Simulation/virtual T n = 80 (in four n.i. • Attention • Individual and team • Individual and team autonomy
(2007)/ robot; physical experiments with management aids (+) autonomy benefit from adjustable and
autonomy and robot 16, 23, 11, 30 • Adaptive autonomy adaptive autonomy
control/I participants resp.)/ (+) • Adjusting autonomy should also
n.a./C • Information allow for shifting between
abstraction (+) management styles

(continued)
1693
Table 7. (continued)
1694

Author/ Robot Morphologyb/ Team Data Basis/


Subcategory/ Robot Levelc/Type of Interactiond/Team Participantsf/Time Underlying Independent
Disciplinea Embodiment Setupe Frameg Theories Variable(s)h Dependent Variable(s)h Key Findings

Kwon et al. n.i./↔, ↓, ↑/No T n.i./n.i./n.i. Adaptive • Robot intervention • Leadership scores (+) • Leader-follower graphs enable
(2019)/ embodiment leadership (use of leader- • Task execution time (+) robots to influence human teams
leadership/I theory follower graph) • Success rate (+) through “redirect[ion of]
a leader-follower relationship,
distract[ion of the] team, or lead
[ing of] a team towards the
optimal goal” (p. 2)
Lee et al. (2010)/ Functional (Pioneer P2- T n = 120 (in 60 teams)/ n.i. • Robot autonomy (+) • System performance • "Automating path planning
autonomy and AT robots)/ University of • Team organization improved system performance.
control/VII ↓/Simulation/virtual Pittsburgh (+/ ) Effects of team organization
robot (USARSim community, paid, were equivocal.” (p. 438)
robotic simulation) no previous
experience with
robot control/C
Lewis et al. (2010)/ Functional (Pioneer P2- T n = 120 (in 60 teams)/ n.i. • Robot autonomy (+) • System performance • Automation of path planning in
autonomy and AT)/↓/Simulation University of • Shared team USAR HRTs helps to improve
control/VII Pittsburgh authority (+) performance
community, paid, • "effects of team organization
no previous favored operator teams who
experience with shared authority for the pool of
robot control/C robots” (p. 1617)
Musić et al. (2019)/ Functional (KUKA T n = 48/12 f/C n.i. • Type of feedback (no • Task performance (n.s./+) • Proposal of control architecture
autonomy and LWR 4+)/↔, (experiment was vs. binary vs. for HRTs
control/V ↓/Physical robot performed 10 relative) • Feedback through wearable
times/participant) fingertip devices helps to
increase performance
Ranzato and Functional/↓/Physical T+S n = 30 (6 teams of 5 n.i. • Team organizational • Efficiency (+) • Loosely coupled teams were
Vertesi (2017)/ robot (remote!) each with 3:2 structure (loose) • Communication (+) found to be the most successful
roles of humans gender ratio)/n.i./C • Teammate trust (+) compared to tightly coupled
and robots/VII hierarchical and consensus
groups

Sellner et al. Functional (Roving Eye, T n_1 = 2, n_2 = 32/1: Situational • Autonomy • Time to completion ( ) • Robots purposefully asking for
(2006)/ Mobile Manipulator, expert users of the awareness, • Success rate (+) help result in more efficient and
autonomy and Crane)/↓/Physical robotic system; 2: concept of • Human workload ( ) robust systems and enable
control/I robot n.i./C sliding human operators to gain
autonomy • Autonomy • Average response time situational awareness
• Extent of ( /+)
information
provision
Strohkorb Sebo Humanoid (Jibo)/↔ T+S/ n = 78 (in 26 teams)/ Social identity • Specialized robot • Human inclusion • “specialized roles may hinder
et al. (2020)/ (not specified)/ 38 f; age: M = 16.82 theory liaison ( ) human team member inclusion,
roles of humans Physical robot years (SD = 0.72); • Robot supportive whereas supportive robot
and robots/I from high school utterances (+ /n.s.) utterances show promise in
program held at encouraging contributions from
Yale University/C individuals who feel excluded.”
(p. 309)

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b n.i. = no information provided by author(s).
c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d Team interaction: T = task interaction, T+S = task & social interaction.
e Team setup: s = human, □ = robot.
f Participants: f = female, m = male.
g Time frame: C = cross-sectional, L = longitudinal.
h Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.

Disciplines, Study Characteristics, and Underlying Theories. Most of the 34 studies (30 of multiple-member HRTs, 4 of dyadic HRTs) in this category are rooted in HRI (9 studies) and USAR (8), along with cognitive science (6), military (4), robotics (4), management (2), and space (1) research fields. Despite finding a couple of studies that reflect a managerial perspective and many studies with an HRI foundation, we note the considerable number of studies with a USAR or military background. All 18 empirical studies are cross-sectional; they are online or laboratory experiments, except for one field study on USAR (Burke & Murphy, 2004). The team setups are mostly autonomous mixed teams (6 studies) or human-directed robot teams (8); one study considers a human/robot-directed mixed team with a human and robotic co-lead and a human assistant (Gombolay, Gutierrez, et al., 2015).
In terms of theoretical foundations, several researchers employ in-group
identification theories (e.g., social identity theory [Tajfel, 1974]). Quite a few
studies use the theory of (shared) mental models (Rouse & Morris, 1986) and
further consider situational awareness (Endsley, 1995). Finally, some re-
searchers draw on sliding autonomy methodology (Sellner et al., 2006) for
HRTs.

Limitations. Some empirical studies rely on small sample sizes of fewer than 30 participants (e.g., Gombolay, Gutierrez, et al., 2015; Ranzato & Vertesi, 2017), and many feature quite young participants. The limitations of laboratory and cross-sectional studies, as detailed for Category 1, also hold for the studies in Category 2. We find slightly more variability in the team setups considered, but further research could broaden these constellations. Most studies in this category, however, already build on theoretical bases.

Category 3: Effects of Team Processes


Focus Areas and Major Findings. Studies of team processes and their effects
account for most extant research on HRTs (see Tables 9–12), perhaps because,
unlike HRI or HRC, HRTs tend to be long-term in nature, so they require
careful consideration of relevant processes, which are at least partially unique
to each team (Abrams & der Pütten, 2020). Furthermore, HRTs have long been
popular, especially in military and USAR settings, which require sophisticated
coordination to fulfill their missions.
Researchers note some prerequisites of successful (physical) coordination
in HRTs (Woods et al., 2004). When studying HRTs, researchers draw heavily
on the concepts of coordination behaviors in all-human dyads that appear
promising (e.g., Bradshaw et al., 2009; Iqbal & Riek, 2017; Shah & Breazeal,
2010) or focus on indirect perceptions. Empirical studies of training (e.g.,
Table 8. Empirical Studies on Dyadic HRTs Related to Inter-Member Team Characteristics and Their Effects.
Robot
Morphologyb/
Author/ Robot Levelc,d/ Team Data Basis/
Subcategory/ Type of Interactiond/ Participantsf/Time Underlying Independent
Disciplinea Embodiment Team setupe Frameg Theories Variable(s)h Dependent Variable(s)h Key Findings

Eyssel and Humanoid/ T+S / n.i. n = 78/German Social • Robot in in- • Warmth (+) • Participants “rated the in-group
Kuchenbrandt ↔/Video/ university identity group (vs. • Mind attribution (+) robot more favourably . . .
(2012)/team image of robot students, 37 m, 40 theory out-group) • Psychological closeness [and] also anthropomorphized
perceptions/I f; age: M = 23.27 (+) it more strongly than the out-
(SD = 3.29)/C • Contact intentions (+) group robot” (p. 724)
• Design preference (+)
Kuchenbrandt Humanoid T+S / n.i. n = 45/25 m, 18 f, Social • Robot in in- • Implicit • "Perceived in-group
et al. (2013)/ (Nao)/n.i./ age: M = 24.81 identity group (vs. anthropomorphization of membership with the robot
team Physical robot years (SD = 5.00), out-group) robot (+) resulted in a greater extent of
perceptions/IV German university • Explicit anthropomorphic inferences
students/C anthropomorphization of about the robot and more
robot (+) positive evaluations.” (p. 409)
• Acceptance of robot (+) • Additionally, participants with
• General willingness to the robot in their in-group
interact with robot (+) “showed greater willingness to
interact with robots in
general.” (p. 409)

Marble et al. Functional/ T+S n = 11/1 f, 10 m; 4 n.i. • Dynamic • Target detection (+) • Autonomy of a robot should be
(2004)/ ↓/Physical expert users, 7 no robot Situation awareness (+) adjustable to allow for
autonomy and robot or some prior autonomy situation awareness and task
control/VII experience; INEEL completion
employees/L (4 • Participants varied greatly in
sessions in direct their ability to trust a robot
succession) (i.e., allow autonomy)
• Performance benefits from
practice

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b n.i. = no information provided by author(s).
c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d Team interaction: T = task interaction, T+S = task & social interaction.
e Team setup: s = human, □ = robot.
f Participants: f = female, m = male.
g Time frame: C = cross-sectional, L = longitudinal.
h Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.

Nikolaidis et al., 2015; You & Robert, 2016), coordination strategies and
frameworks (Iqbal et al., 2016; Shah et al., 2011; H. Wang et al., 2010), and
automated cooperation (Gao et al., 2012; J. Wang et al., 2008) all note positive
effects, such as on team fluency, perceived robot trustworthiness, or team
performance in HRTs.
Research on communication in multiple-member HRTs identifies information flows supported by sensor networks (Kantor et al., 2006), uses of “Human–Robot Interaction Operating Systems” (Fong et al., 2006), back-channeling (Jung et al., 2013), real versus simulated videos (Canning et al., 2014), and conflict moderation through robots (Jung et al., 2015). For both dyadic and multiple-member HRTs, researchers have examined verbal versus non-verbal communication (e.g., Breazeal et al., 2005; Ciocirlan et al., 2019; Nikolaidis et al., 2018; Williams et al., 2015), including communication based on a partner’s knowledge and behavior (Lo et al., 2020). These studies mostly reveal positive effects of (extensive) communication in HRTs (Tables 9–12). Another research pathway involves communication interfaces (Marge et al., 2009), communication models (e.g., Kruijff, Janíček, et al., 2014; Nakano & Goodrich, 2015), and the design and implementation of HRTs for conversational HRI (Zheng et al., 2013).
Research into collaboration in HRTs refers to linkages explicitly estab-
lished in an HRT context, which should not be confused with the broader topic
of HRC. Conceptual studies range in focus, including the optimal setup of
“hybrid teams” with robots, virtual agents, and humans as team members
(Schwartz et al., 2016), collaboration challenges (Fiore et al., 2011), col-
laborative tools (Bruemmer & Walton, 2003), the development of collabo-
rative robotic teammates (Hayes & Scassellati, 2014), dynamic peer-to-peer
teaming (Tang & Parker, 2006), task-oriented collaboration with semantic-
based path planning (Yi & Goodrich, 2014), decision-making (Stewart et al.,
2012), and mutual initiatives (Bruemmer et al., 2002). Researchers have also examined collaboration frameworks (e.g., Hoffman & Breazeal, 2004; Marble et al., 2003) for dyadic HRTs and a framework of joint action perception (Iqbal et al., 2015) for multiple-member HRTs. Finally, with regard to trust, three studies of dyadic HRTs discuss and examine the impact of appropriate trust (i.e., trust calibrated so that it benefits team performance) (Chen et al., 2020; Ososky et al., 2013) and its measurement (Freedy et al., 2007).

Disciplines, Study Characteristics, and Underlying Theories. More than one-third of the 55 studies in this category (35 studies of multiple-member HRTs, 19 of
dyadic HRTs) are rooted in HRI (14 studies), cognitive science (12), or USAR
(11), followed by robotics (10), military (7), and space (1) research. Most of
the 32 empirical studies are cross-sectional or do not reveal the time frame for
their experiments. Burke and Murphy’s (2007) longitudinal study includes
Table 9. Conceptual Studies on Multiple-Member HRTs Related to Team Processes and Their Effects.

Robot Morphologyb/
Author/Subcategory/ Robot Levelc/Type of Team Interactiond/ Underlying
Disciplinea Embodiment Team Setupe Theories Key Findings

Alboul et al. (2008)/ n.i. / ↓ / Physical robot T n.i. • Proposal of theoretical framework for
(physical) navigation in HRTs
coordination/I
Bradshaw et al. n.i. / n.i. / n.i. T+S / n.i. Coordination • Coordination in human-agent-robot
(2009)/(physical) theory teams as an essential ingredient of
coordination/I joint activities: Fulfillment of
teamwork model and resulting
expectations towards
communication (towards leader and
colleagues) will allow robots to be
seen as team mates
Brown et al. (2005)/ n.i. / ↓ / n.i. T n.i. • Proposal of reference framework for
(physical) HRTs
coordination/III
Bruemmer et al. Functional (augmented T / n.i. Role theory, shared • Proposal of a framework for mutual-
(2002)/ ATRVJR)/↓, (↔, mental models initiative in HRTs
collaboration/III ↑)/Physical robot
Bruemmer and Functional (augmented T / n.i. Shared mental • Discussion of approach for control
Walton (2003)/ ATRVJR) / n.i. / n.i. models architecture for human–robot teams
collaboration/III in a military context
Fiore et al. (2011)/ n.i. / n.i. / n.i. T+S / n.i. n.i. • Successful interactions in HRTs are
collaboration/III based on organizational (and
corresponding roles), social, and
cultural models
• Research has to work on gaining
insights into how robots fit into such
models and how they can
understand organizational, social,
and cultural factors
Hayes and Scassellati n.i. / n.i. / n.i. T+S / n.i. n.i. • Proposal of four research questions
(2014)/ on collaboration in HRTs
collaboration/I
Kruijff, Janı́ček, et al. Functional (“Generaal”, T + S Situational • Proposal and validation of “user-
(2014)/ P3-AT; NIFTi UGV awareness centric design methodology in
communication/VII and UAV)/↓/Physical developing systems for human-robot
robot teaming in Urban Search and
Rescue” (p. 1)
• Robot acceptance is important
Kruijff et al. (2012)/ Functional/↓/Physical T Situational • Proposal of experience and
communication/III robot awareness communication model to support
shares human–robot activities
Kruijff-Korbayová Functional (NIFTi UGV T Situational • Description of the project “TRADR:
et al. (2015)/ and UAV)/↓/Physical awareness long-term human-robot teaming for
communication/IV robot robot assisted disaster response” (p.
193) and the user-centric design
approach that is used
Nakano and Goodrich n.i./n.i./n.i. n.i. / n.i. n.i. • Proposal of “new interface concept,
(2015)/ a Graphical Narrative Interface
communication/V (GNI)" (p. 634)
• "We hypothesize that the GNI allows
users to search and analyze
spatiotemporal information more
easily and quickly than a typical GUI.”
(p. 634)
Nourbakhsh et al. n.i./n.i./n.i. T / n.i. n.i. • Proposal of an agent-based
(2005)/ “architecture for Urban Search and
communication/VII Rescue and a methodology for
mixing real-world and simulation-
based testing” (p. 72)
Schwartz et al. (2016)/ Humanoid (Aila), n.i. / n.i. n.i. • Discussion of setup of teams with
collaboration/I functional (Artemis, robots, virtual agents and humans as
Compi)/n.i./n.i. team members (“hybrid teams")
Stewart et al. (2012)/ n.i./n.i./n.i. n.i. / n.i. Decision theory • Proposal of decision-making model
collaboration/IV for HRTs
Tang and Parker n.i./↔/n.i. T / n.i. Information • Proposal of human–robot teaming
(2006)/ invariance approach ASyMTRe, dealing “with
collaboration/I theory, schema the issue of how to organize robots
theory into subgroups to accomplish tasks


collectively based upon their
individual capabilities” (p. 27)
Woods et al. (2004)/ n.i./n.i./n.i. T / n.i. n.i. • Exploration of issues with human–
(physical) robot coordination
coordination/III
Yi and Goodrich n.i./↓/n.i. Shared mental • Proposal of collaboration model using
(2014)/ T models shared mental models
collaboration/V

Note. aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b n.i. = no information provided by author(s).
c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s).
e Team setup: s = human, □ = robot.

two runs over a 2-day period though. We find simulation studies (H. Wang
et al., 2010), laboratory studies (You & Robert, 2016), and field experiments
(Burke & Murphy, 2007). Studies of multiple-member HRTs mostly focus on
task interactions, and the setups include human-directed robot teams (14) or
autonomous mixed teams (9 studies) or else do not specify. With dyadic HRTs,
researchers examine autonomous human–robot pairings (12), human-directed
robots (4), or do not disclose their setups (4).
In this category, almost half of the studies report theoretical considerations.
These range from coordination theory (Malone & Crowston, 1990) to role theory (Braga, 1972) and social signaling and back-channeling (Dennis &
Kinney, 1998). Again, multiple studies rely on shared mental models (Rouse
& Morris, 1986). Another popular theory is situational awareness, which is the
basis for various studies as detailed in Tables 9–12.

Limitations. The study samples tend to be small and skewed toward young participants, and many studies
use laboratory settings to conduct cross-sectional experiments. Social robots
are underrepresented relative to functional robots, despite being developed
specifically to interact with people (Kirby et al., 2010), implying their par-
ticular suitability for HRTs. With regard to the theoretical basis, we see
potential for studies to strengthen the theoretical soundness of the examined
phenomena by integrating behavioral theories.

Category 4: Moderating Effects


Focus Areas and Major Findings. Moderator variables influence the strength of
the relationships between independent and dependent variables (Baron &
Kenny, 1986). These effects often stem from environmental or situational
factors (Baron & Kenny, 1986), prompting researchers to examine moderating
effects for multiple-member HRTs that reflect human capabilities (Claure
et al., 2020), curiosity and control (You & Robert, 2016), task complexity
(Jung et al., 2013), or number of sessions (Correia, Petisca, et al., 2019). For
dyadic HRTs, researchers also examine the effects of number of sessions
(Marble et al., 2004) and remote system experience (Marble et al., 2003). Most
of these proposed moderators appear to strengthen the relationships between the studied independent and dependent variables, as indicated by
Tables 13 and 14. For example, curiosity positively moderates the effect
between training and individual performance (You & Robert, 2016), task
complexity positively influences the relationship between back-channeling
and team functioning (Jung et al., 2013), and target detection increases with
the number of sessions (Marble et al., 2004). Additional details on these
studies are available in the descriptions in their respective main categories.
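To make the statistical logic of such tests concrete, moderation is commonly assessed by adding an interaction term to a regression model (Baron & Kenny, 1986). The following generic formulation is an illustrative sketch only; the variable labels echo the examples above rather than any particular study’s operationalization:

Y = β0 + β1X + β2M + β3(X × M) + ε,

where Y denotes the outcome (e.g., individual performance), X the independent variable (e.g., training), M the moderator (e.g., curiosity), and a significant β3 indicates that M strengthens, weakens, or reverses the X–Y relationship.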
Table 10. Conceptual Studies on Dyadic HRTs Related to Team Processes and Their Effects.
Author/Subcategory/ Robot Morphologyb/Robot Team Interactiond/ Underlying
Disciplinea Levelc/Type of Embodiment Team Setupe Theories Key Findings

Breazeal, Brooks, et al. Humanoid (Leonardo)/ T+S Collaborative • The authors follow a perspective “of a balanced partnership
(2004)/collaboration/I ↔/Physical robot discourse where the human and robot maintain and work together on
theory, joint shared task goals” (p. 270)
intention • Paper gives an overview of the different robotic features of the
theory robot
Breazeal, Hoffman, and Humanoid (Leonardo)/ T+S Collaborative • Presentation of approach for collaborative human–robot
Lockerd (2004)/ ↔/Physical robot discourse teamwork
collaboration/I theory, joint
intention
theory
Oh et al. (2015)/(physical) n.i./n.i./n.i. (no robot involved T+S / n.i. n.i. • Proposal and validation of model for indirect perception in HRTs
coordination/V in experiments)
Ososky et al. (2013)/trust/ n.i./↔/Physical robot T+S Shared mental • Trust in HRTs should not simply be maximized, the goal should be
IV models to have appropriate trust (both in intention and ability)
Shah and Breazeal (2010)/ n.i./n.i./n.i. (no robot involved T+S / n.i. Shared mental • Implicit and explicit communication in HHT give insights into how
(physical) coordination/ in experiments) models robots in HRTs could act
IV • "a robot should respond to communications differently,
depending on whether they are implicit, explicit, verbal only,
nonverbal only (gesture), or combined.” (p. 244)
Visser et al.(2020)/trust/IV n.i./↔/n.i. T+S / n.i. Theory of mind, • Proposal of human–robot team trust model that has a longitudinal
trust theories perspective on the development and calibration of trust in HRTs

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b n.i. = no information provided by author(s).
c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s).
e Team setup: s = human, □ = robot.

Disciplines, Study Characteristics, and Underlying Theories. The 7 studies (4 of multiple-member HRTs, 3 of dyadic HRTs) in this category come from USAR
(3 studies), cognitive science (2), robotics (1), and management (1) research.
Researchers use both functional and humanoid robots to conduct cross-
sectional (4 studies) online or laboratory experiments. Two studies reveal
longitudinal designs, with 3 or 4 sessions, respectively, but another does not
disclose its study setup. The collaborative multiple-member HRTs studied are autonomous mixed teams (3 studies) or a human-led robot
team (1 study). The dyadic HRTs involve human-directed robots engaged in
both task and social interaction (2 studies). Three studies (Claure et al., 2020;
Jung et al., 2013; Marble et al., 2004) adopt theoretical bases for their ex-
aminations, as detailed in the respective main categories.

Limitations. Relatively few studies in our sample consider moderating effects, and only two of them are longitudinal. It is unlikely that “one-size-fits-all”
applies to HRTs (Stock, 2004), so moderators should be investigated further,
especially with long-term investigations of teams involving both humans and
robots.

Category 5: Integrative and Overarching Studies


Focus Areas and Major Findings. In this last category, we gather studies that
propose overarching frameworks, metrics, and HRT designs; publications that
address ethics in HRTs; and investigations of the inputs, processes, and outputs
of HRTs, with an integrative perspective (Tables 15–17). The overarching
frameworks, metrics, and HRT design studies include proposals of new metrics
and taxonomies, beyond existing ones that focus on HRI or HRC (Burke et al.,
2008; Pina et al., 2008). They feature components for evaluating team per-
formance (Pina et al., 2008; Visser et al., 2006). Ma et al. (2018) also consider
general design concepts. As an emerging topic, ethics in HRTs appears in
conceptual investigations of both dyadic and team interactions (Arnold &
Scheutz, 2017; Tamburrini, 2009). Finally, integrative studies of mediated
relationships in HRTs are relatively recent (Tables 15–17). Two studies deserve
particular consideration: Oleson et al. (2011) identify a few antecedents of trust
in HRTs (e.g., human, robot, and environmental characteristics). Then with an
input–mediator–output–input (IMOI) approach, an extension of the established
IPO framework for teams, You and Robert (2018b) offer a dynamic perspective
on HRTs that might inform studies of long-term HRTs.

Disciplines, Study Characteristics, and Underlying Theories. The 15 studies in this category (13 of multiple-member HRTs, 2 of dyadic HRTs) are rooted in
Table 11. Empirical Studies on Multiple-Member HRTs Related to Team Processes and Their Effects.
Robot Morphologyb/
Author/Subcategory/ Robot Levelc/Type of Team Interactiond/ Data Basis/Participantsf/ Underlying Independent
Disciplinea Embodiment Team Setupe Time Frameg Theories Variable(s)h Dependent Variable(s)h Key Findings

Burke and Murphy (2007)/ Functional (Inuktun T n = 62/90% m; majority Shared mental • Remote shared visual • Team performance • Remote shared visual presence
collaboration/VII Micro Variable between 35-54 years; models, presence (+) may help remote USAR HRTs
Geometry Tracked NASA USAR task-force situational • Visual contact (n.s.) “to perform as effectively as
Vehicle (VGTV) personnel/L (two runs á awareness collocated teams” (p. 161)
robot)/↓/Physical 20 minutes over 2-day
robot period; final n = 50 (#
teams completing both
runs))
Canning et al. (2014)/ Humanoid & T / 1: n_1= 24, n_2 = 137, n_3 n.i. • Video feed type (real • Task performance • Examination of robot perceptions
communication/I Functional (Xitone = 183/mTurk; 1: 12 f, vs. simulated) (n.s.) in remote team settings
Design MDS, age: range 18–31 years, • Perceived • "realism of the [video]feed
Willow Garage M = 20.88 (SD = 2.59), collaboration (+ for becomes important when the
PR2, VGo, iRobot 2 & 3: all right-handed, fluent real video) human teammate knows about
Create)/ in English; 2: 48 f, age: • Perceived utility (+ for the robot’s appearance and they
↓/Simulation/ range: 18–60 years, real video) work together on a task” (p.
virtual robot, median = 31, US • Video feed type (real • Task performance 4361)
Video/image of residents; 3: 91 f, age: vs. simulated) (n.s.) • See study for details on results
robot range: 18–60 years, • Introduction of • Perceived and interaction effects of study 3
median = 31, US robot collaboration (n.s.)
residents/C • Perceived utility (n.s.)
• Perceived competence
(n.s.)
• Perceived warmth
(n.s.)
Fong et al. (2006)/ Functional, humanoid T n.i./n.i./C n.i. • Reliability of robots • Productivity (amount • Software frameworks are being
communication/VI (K10 rover, (independence, as of useful work, developed (e.g., HRI/OS) to
Robonaut)/↓, a result of exposure time in allow for effective work of
↔/Physical robot understanding of space) (+) humans and robots
communication)
Gao et al. (2012)/(physical) n.i./↓/Simulation/ T n = 48/19 f; age: range n.i. • Team structure • Task performance • Automated search guidance
coordination/VII virtual robot 19–47 years, M = 26.6 (pooled, sector) (n.s.) neither increased nor
(USARsim) (SD = 5.5); 33 of them • Search guidance (no, • Task completion time decreased performance” (p. 81)
students/C suggestion, (- for suggested • Search guidance decreased
enforced) guidance in sector average task completion time in
teams; n.s. for other Sector teams” (p. 81)
conditions) • "pooled teams experienced
• Subjective workload (- lower subjective workload than

for pooled teams) sector teams” (p. 81)


Iqbal et al. (2015)/ Functional T n = 2/n.i./n.i. n.i. • n.i. • n.i. • Proposal of an event-based
collaboration/V (Turtlebot)/ model to enable robotic action
↔/Physical robot perception in HRTs
Iqbal et al. (2016)/(physical) Functional T / pilot: n_pilot = 7, n_main = 27 n.i. • Robot movement • Synchronization (+) • Proposal and validation of
coordination/V (turtlebot)/ (in 9 groups)/pilot: 3 f; based on • Robot Timing “approach to enable robots to
↔/Physical robot main: 14 f, age: M = synchronization- Appropriateness (+) perceive human group motion
main: 22.93 (SD = 3.98), index based in real time to anticipate future
mainly students/C anticipation (vs. actions and synthesize their own
based on event motion accordingly " (p. 909)
cluster-based 
• “the robot performs better
anticipation) when it has an understanding of
high-level group behavior than
when it does not” (p. 909)
Iqbal and Riek (2017)/ Functional T n= 18 (in 6 groups)/11 f; n.i. • n.i. • n.i. • "results might suggest that an
(physical) (turtlebot)/↔ age: M = 24.7 years addition of a robot with
coordination/V /Physical robot (SD = 4.5); undergrad heterogeneous behavior to
and grad students/C a group significantly reduces
the overall group coordination,
and might be an important
indicator of human-robot
group dynamics.” (p. 1716)
Jung et al. (2013)/ Humanoid (Maddox T+S n = 73/age: range 18–40 Back- • Back-channeling • Team functioning (+) • "subtle back-channeling by
communication/VII and Nexi), years (M = 25.0, SD = channeling, • Perceived robot robots in human–robot teams
functional (UAV, 6.19); from university social signaling engagement (+) helped team functioning (lower
not specified)/ community/C • Perceived robot stress, lower cognitive load) and
↔/Physical robot competence ( ) perceived engagement of the
robots, especially when the task
was complex, but at the same
time lead to robots being seen
as less competent.” (p. 1563)
• "the biggest benefits from back-
channeling in human–robot
teams may be seen when tasks
are demanding and complex.”
(p. 1563)


Jung et al. (2015)/ Functional (Pioneer 3 T+S n = 106 (in 53 teams)/ n.i. • Robot intervention • Awareness of conflict • "we found that the robot’s
communication/IV robot base + OWI 55 m; age: range 18–65 (+) repair interventions increased
robot arm + arm- years (M = 24.5, SD = • Affect (+/n.s.) the groups’ awareness of
control board + 8.0); recruited from • Perceptions of team conflict after the occurrence
speaker)/ university/C members’ of a personal attack thereby
↔/Physical robot contributions (n.s.) acting against the groups’
• Team performance tendency to suppress the
(n.s.) conflict.” (p. 229)s


Kantor et al. (2006)/ Functional/↓/Physical T n.i./n.i./C n.i. • n.i. • n.i. • Sensor networks can be used by
communication/VII robot robots and humans to extend
their joint capabilities
Kruijff, Kruijff-Korbayová, Functional (NIFTi T n.i./n.i./n.i. Situational • n.i. • n.i. • Description of the experiences
et al. (2014)/ UGV and UAV)/ awareness in designing, developing and
communication/VII ↓/Physical robot deploying systems for USAR

Marge et al. (2009)/ Functional (Pioneer T+S n.i./n.i./n.i. n.i. • n.i. • n.i. • Description of the human–
communication/V P2-DX, Segway robot interface TeamTalk
Robotic Mobility
Platform (RMP))/
↔/Simulation/
virtual robot

Nevatia et al. (2008)// Functional/ T n.i./n.i./n.i. n.i. • n.i. • n.i. • Proposal and validation of an
collaboration/VII ↓/Simulation/ “integrated system for
virtual robot semiautonomous cooperative
exploration, augmented by an
intuitive user interface for
efficient human supervision
and control” (p. 2103)
• "having a human in the loop
improves task performance,
especially with larger numbers
of robots” (p.2103)
H. Wang et al. (2010)/ Functional (Pioneer T n = 60 participants Situational • Automated path • System performance • For USAR tasks, automated
(physical) coordination/ P2-AT)/ (acting in teams of 2 -> awareness planning (+) path planning helps to improve
VII ↓/Simulation/ 30 teams)/University of • Team organization team accuracy and
virtual robot Pittsburgh, paid, no (shared authority performance

(USARSim robotic previous experience for robots) (+) • Sharing authority for robots
simulation) with robot control/C during team organization also
helps to improve performance
(re/accuracy and finding)


J. Wang et al. (2008)/ Functional (P2DX T n = 19/age: range 19–33 Crandall’s • Needed physical • Team performance • "Automating cooperation [by
(physical) coordination/ robots, Zergs)/ years, from Pittsburgh neglect proximity ( ) using subteams] reduced CD
IV ↓/Simulation/ university/C tolerance • Coordination demands [coordination demands] and
virtual robot model, (n.s.) improved performance.” (p. 9)
situational
awareness • Automation of • Team performance (+)
cooperation • Coordination demands
(+)
Williams et al. (2015)/ Functional (VGo, T n_1 = 28, n_2 = 28/1&2: n.i. • Robot-robot • Perceived creepiness • "silent communication of task-
communication/IV Roompi)/ 14 f, age: range 18–65, communication of the robot (1: n.s.; 2: dependent, human-
↓/Physical robot mostly students/C (verbal, silent) + for silent understandable information
communication) among robots is perceived as
• Perceived creepy by cooperative, co-
trustworthiness of located human teammates” (p.
the robot (1 & 2: n.s.) 24)
• Perceived efficiency of • "increased natural language
the robot (1 & 2: n.s.) interaction with a robot
• Perceived enhances humans’ general
cooperativity of the perceptions of that robot” (p.
robot (1 & 2: n.s.) 38)
You and Robert (2016)/ Functional/humanoid T n = 60/36 f; age: M = n.i. • Training • Individual performance • "training minimized the negative
(physical) coordination/ (adapted from the 22.86 years (SD = (+/n.s.) impacts of curiosity and
®
IV LEGO 4.51); from university • Team performance heightened the positive
®
Mindstorms EV3 in US/C (+/n.s.) impacts of control on task
sets)/↓/Physical involving the use of a robot.”
robot (p.449)


Zheng et al. (2013)/ Humanoid (Robovie- T n_customer = 15, n.i. • n.i. • n.i. • Introduction of simulation tool
communication/V II)/↔/customer & n_operator = 16; for “models for operation
operator: n.a.; n_simulation = 15; timing, customer satisfaction
simulation: n_case study=n.i./ and customer–robot
simulation/virtual customer: 8 f, age: M = interaction” (p. 843), and
robot; case study: 22 years; operator: 7 f, “techniques for managing
physical robot age: M = 21 years; interaction flow and operator
simulation: 6 f, age: M task assignment” (p. 843)


= 20 years; case study:
n.i.; all: Japanese
undergrad students/C

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b n.i. = no information provided by author(s).
c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s).
e Team setup: s = human, □ = robot.
f Participants: f = female, m = male.
g Time frame: C = cross-sectional, L = longitudinal.
h Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.
Table 12. Empirical Studies on Dyadic HRTs Related to Team Processes and Their Effects.

Author/ Robot Morphologyb/ Team


Subcategory/ Robot Levelc/Type of Interactiond/ Data Basis/Participantsf/Time Underlying
Disciplinea Embodiment Team Setupe Frameg Theories Independent Variable(s)h Dependent Variable(s)h Key Findings

Bozcuoglu et al. Functional T+S n.i./n.i./n.i. n.i. • n.i. • n.i. • Transparency on robotic behavior
(2015)/ (Quadcopter)/↓, and reactions through
communication/ ↔/Simulation/ communication helps to increase the
VII virtual robot success of HRTs
Breazeal et al. Humanoid T+S n = 21/10m; age: range 20–40; Shared mental • Non-verbal social cues and • Task performance • Non-verbal communication plays an
(2005)/ (“Leo(nardo)")/↓, local campus, no interaction models behavior (understandability of the important role also in the
communicatin/I ↔/Physical robot with robot before/C robot, efficiency of task effectiveness of HRTs
performance, robustness
to errors that arise from
miscommunication) (+)
Chen et al. (2020)/ Functional/`n.i./ T+S n_1 = 201 (simulation), n_2 = n.i. • Trust • Team performance (+/ ; • Proposal of computational model to
trust/V Simulation/virtual 20 (real robot)/1: age: range appropriate level of trust integrate trust into robotic behavior
robot, Physical 18–65 years, mTurk, from needed for best • "maximizing trust alone does not
robot the US, 2: age: range 21–65 performance) always lead to the best performance”
years, from University/C (p. 9:1)
Ciocirlan et al. Humanoid (TIAGo)/ T+ S / n.i. n = 71/40 m, 30 f; age: range Trust theories • Communication (no • Trust (+ for task • "the decrease in trust when the robot
(2019)/ n.i./Simulation/ 14–53 years, M = 24 years communication, text and communication) fails to perform the task is lower
communication/ virtual robot (SD = 6)/C verbal task when [there] is text and verbal
IV communication, text and interaction between the robot and
verbal informal the participant” (p. 7)
communication) • "Trust at the end of the experiment
was higher than the initial trust when
the participants had a text and verbal
interaction communication related
to the task” (p. 7)
Freedy et al. (2007)/ Functional (unmanned T+S n = 12/4 f; age: range 18–25 Collaborative • Robot competency • Time to complete mission • Introduction of an objective measure
trust/III ground vehicle)/ years, most with several performance ( ) of trust dependent on the number of
↓/Simulation/virtual years of gaming experience, model • Operator intervention ( ) operator overrides/interventions
robot 1.5 hours of training/L (15 • Workload ( ) • Knowledge about robot
trials/participant, 5 trials of 3 • Trust • Human intervention (-/+-; competencies and characteristics
competency levels in firing appropriate level of trust (e.g., level of performance) can help
behavior each) needed) to foster trust


Hoffman and Humanoid (“Leo")/↓, T+S n.i./n.i./C Dialog theory, • n.i. • n.i. • Proposal of a framework for dynamic
Breazeal (2004)/ ↔/Physical robot joint intention collaboration
collaboration/I theory • To establish successful HRTs, robots
and humans have to share the same
goals, communicate with each other
and show commitment to jointly
reach their goals
Hoffman and Functional (Symon, T+S n = 32/15 f; MIT community, n.i. • Robot anticipatory action • Task efficiency (+/n.s.) • Anticipatory action of a robotic
Breazeal (2007)/ forklift-like)/ laboratory/C • Perceived robot teammate helps to increase task
collaboration/I ↔/Simulation/ contribution to team efficiency and improves “the
virtual robot fluency (+) perceived commitment of the robot
• Perceived robot to the team and its contribution to
contribution to team team’s fluency and success” (p. 1)
success (+)
• Perceived robot
commitment (+)
Koppula et al. Functional/↓/Physical T+S n = 5/n.i./n.i. n.i. • Anticipatory planning • Perceived robot • Proposal of graphical model to
(2016)/ robot (Kodiak collaboration (+) anticipate human actions
collaboration/I [PR2])/simulation/ • Perceived robot timing (+)
virtual robot • Satisfaction with robot (+)
• Willingness to work with
the robot (+)
• Time savings (+ /not stated
explicitly)
Lo et al. (2020)/ Functional/↔/Physical T+S n = 16/8 f, visitors or students n.i. • Robot motion planning • Perceived clarity of intent • Proposal of model for multi-agent
communication/V robot at the campus/C approach (nested (+) planning based on partner’s
inference for • Motion predictability and knowledge and behavior (NICA)
corroborative acts naturalness (+) • Experiment shows that NICA “is
(NICA) versus legible • Perceived social perceived as significantly more
motion) appropriateness (+) natural, socially appropriate, and
• Perceived safety, fluent to team with, while being both
intelligence, capabilities, more predictable and intent-clear”
thoughtfulness, and fluency (p. 326)
to team with of the robot
(n.s./+)


Marble et al. (2003)/ Functional (ATRVJr)/ T+S n = 11/1 f, 10 m; 4 expert users, n.i. • Mixed-initiative • Adaptation to autonomy • Utilization of robot autonomous
collaboration/VII ↓/Physical robot 7 no or some prior interaction (not reported) capabilities depends on previous
experience; INEEL • Perceived ease to predict robotic experience of users
employees/C outcome of control (not (inexperienced users utilize
reported) autonomy more willingly)
• Control challenges should be
considered
Nikolaidis et al. Functional/ T+S n_1 = 36, n_2 = 24/1: recruited Shared mental • Team training (human–robot • Mental model • "cross-training yields statistically
(2015)/(physical) ↓/Simulation/virtual from MIT; 2: n.i./C models cross-training) convergence (+) significant improvements in
coordination/IV robot, Physical • robot trustworthiness quantitative team performance
robot (+) measures, as well as significant
• Team fluency (+) differences in perceived robot
performance and human trust”
• Team training (human–robot • Objective and subjective (p.1711)
cross-training) without measures of team • "This study supports the hypothesis
learning component in fluency and participant’s that the effective and fluent teaming
algorithm satisfaction (n.s. ofa human and a robot may best be
achieved by modeling known,
effective human teamwork
practices.” (p. 1711)
Nikolaidis and Shah Functional/↓, T+S n = 36/recruited from MIT/C Shared mental • Team training (human– • Mental model convergence • A good way to achieve effective and
(2013)/(physical) ↔/Physical robot models robot cross-training) (+) fluent human–robot teaming may be
coordination/IV • Mental model similarity (+) to model effective practices for
• Team fluency (concurrent human teamwork (p. 33)
motion, idle time) (+) • Human–robot cross-training leads to
• Perceived robot “statistically significant
performance (+) improvements in quantitative team
• Human trust (+) performance measures” (p. 33)
(compared to standard
reinforcement learning techniques)
Nikolaidis et al. Humanoid (HERB)/ T+S n_1 = 151 (from initial 200- Game theory • Robot communication • Trust in the robot (+ /n.s.) • "enabling the robot to issue verbal
(2018)/ ↔/Video/image of exclusions)/1: 60% female, • Adaption to robot (+ /n.s.) commands is the most effective form
communication/I a robot (video age: M = 35 years, from US, of communicating objectives, while
playback) mTurk/C retaining user trust in the robot.” (p.
22:1)


Shah et al. (2011)/ Humanoid (Nexi, T+S n = 16 subjects/10 m; age: M = n.i. • Usage of robot plan • Human idle time ( ) • Chaski (task-level executive for
(physical) a Mobile- 29.4 years (SD = 16.1), execution system Chaski • Time to complete task (n.s.) robots) is able to reduce human idle
collaboration/I Dexterous-Social recruited from the MIT and • Robot trustworthiness (+) time significantly and by this
(MDS) robot)/ Greater Boston area/C • Team fluency (n.s.) supports the hypothesis that it can
↔/Physical robot • Perceived robot help to increase team performance
performance (n.s.)
• Sharing of common goals
(n.s.)

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b n.i. = no information provided by author(s).
c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s).
e Team setup: s = human, □ = robot.
f Participants: f = female, m = male.
g Time frame: C = cross-sectional, L = longitudinal.
h Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.

cognitive science (6), HRI (5), ethics (2), management (1), and military (1)
research. All 9 empirical studies are cross-sectional, but they feature both
laboratory and field experiments. Teams are complex, so the comparatively small number of studies that take an integrative perspective is surprising.
These studies include both functional and humanoid robots, such as those
adapted from Lego® Mindstorms® sets (You & Robert, 2018b; 2019a;
2019b), but no social robots. The multiple-member HRTs all reflect human-
directed robot teams that are, in most cases, collaborative. Studies of dyadic
HRTs also mostly consider dyadic collaborative teams and include human-
directed robots or an autonomous human–robot pairing.
About half of the studies in this category specify their theoretical foun-
dation and build, for example, on motivational theories of individual and team
motivation (Kanfer et al., 2008). Other theories include media synchronicity (Dennis
et al., 2008); the technology acceptance model (Davis, 1986) and the unified theory of acceptance and use of technology (Venkatesh et al., 2003); social
identity theory; the IPO model; notions of trust in relation to teamwork (Zaheer et al.,
1998), technology (McKnight et al., 2011), and robots (Yagoda & Gillan, 2012); and
social categorization and attraction theories (Hogg & Turner, 1985).

Limitations. We note several limitations pertaining to integrative, overarching studies of HRTs, beginning with a lack of consideration of social robots, which
are especially relevant to HRTs and integrative investigations of them. An-
other important consideration is the time frame; many team processes only
develop over time and should be examined with a dynamic approach to gain
more insights. Finally, HRT designs other than human-directed robot teams
need to be examined from an integrative perspective.

Discussion
Summary of Findings of Existing Research
Despite vastly different definitions of HRTs and distinct research foci, researchers from multiple disciplines all pursue insights into HRTs and their related processes. In Figure 5 we summarize the main categories and sub-
categories linked to the IPO model of teams. Because so few studies examine
moderating effects, we cannot identify further subcategories. In addition, we
find that extant research exhibits a dominant focus on HRT inputs and
processes, so we do not elaborate further on the subcategories of team outputs.
Figure 5. Overview of main categories and subcategories examined in the IPO model of teams.

Intra-member team characteristics are considered less frequently than other topics and primarily in relation to team setups and processes, probably due to their interdependencies with HRI and HRC. Nonetheless, research on robot behavior is rooted in an HRT context and reveals that positive robot
behaviors and transparency exert positive effects on team processes and
outcomes. In studies that examine both physical and behavioral robotic
characteristics, we also find promising pointers for future research, especially in terms of holistic robot design. Human preferences and behaviors are
equally interesting topics to include in efforts to understand HRTs fully.
Inter-member team characteristics have been examined more extensively;
autonomy, control, and leadership are included in many studies. Vastly dif-
ferent definitions of HRTs, across a variety of team setups (e.g., leadership),
affirm the logic of this central focus. Yet we also note that all empirical studies
on (sliding) autonomy and control in HRTs indicate that (partially) autono-
mous robots and shared control can facilitate the work of human team
members and make HRTs more efficient.
Compared with individual team members and team characteristics, team
processes in HRTs and their effects have been investigated very intensively.
Physical coordination has long been a topic, primarily with a focus on robotics
aspects and the development of coordination concepts, but collaboration in
HRTs has come to the attention of researchers only more recently. Here,
interesting parallels are being drawn between HRTs and all-human teams with
regard to the benefits of coordination or communication mechanisms. In
general, studies indicate that well-choreographed coordination and commu-
nication efforts are key success factors for HRTs.

The presence of moderating effects in HRTs is coming more into focus, and
it remains an important consideration because moderator variables shape the relationships among team inputs, processes, and outputs. There is not
one size that fits all HRTs, so further investigations should seek insights into
relevant moderators and their effects.
Finally, integrative and overarching studies are lacking, despite their
importance for gaining a holistic, deep understanding of the mechanisms in
HRTs. Here, we note that HRTs are complex systems that require intensive
research, and using insights from all-human team research could help clarify
them, especially in real-world settings. For example, You and Robert (2018b)
discuss a feedback loop in the IPO model (their IMOI approach) that may be conceptually plausible for HRTs.

Summary of Limitations of Existing Research


Overall, HRTs have been widely addressed by research, yet this domain still
has a way to go to establish what constitutes successful and sustainable HRTs
for society and business. Therefore, along with the key insights, we delineate
three overarching limitations of extant research on HRTs. First, the cognitive
sciences domain is emerging, but most research comes from USAR, space
exploration, or robotics efforts, involving mainly functional robots. This
one-sided view of HRTs must be broadened to encompass managerial and cognitive perspectives too. Second, shared mental models and social identity theory offer good starting points, but the opportunities for applying behavioral theories, as already done in research on all-human teams, are vast. Third,
because it often features student samples, small samples, laboratory studies, and
cross-sectional examinations, extant HRT research leaves some considerable gaps
that point to a research agenda, as we discuss in the next section.

Limitations of this Review


This literature review has a number of limitations. Foremost, there may be
relevant publications that were not included in this review despite a thorough
literature search and efforts to avoid selection bias. We also limited our review to publications in English. With this review, we focus on robots as team members, but we openly acknowledge that other forms of human–technology teams exist beyond HRTs, such as teams with virtual assistants. Insights from these settings might be useful for HRT research too. To the best of our
knowledge, no studies address teams with non-robotic but artificial team
members in a business context. Therefore, another review might provide an
overview of non-robotic artificial team members (e.g., virtual assistants) and
compare the insights with our findings related to research into robotic team
Table 13. Studies Related to Moderating Effects in Multiple-Member HRTs.

Author/Disciplinea/ Moderator
Main Category Independent Variable(s)b Dependent Variable(s)b Variable(s) Moderating Effectb
Claure et al. (2020)/V/ • Robot fairness • User trust • Human • (+ for weak performers;
Cat. 1 • Perceived robot capabilities n.s. otherwise)
fairness
Correia, Petisca, et al. • Robot goal orientation • Competitiveness Index • Session • Mixed results, see study
(2019)/IV/Cat. 1 (performance-driven vs. learning- • McGill Friendship number for details
driven) Questionnaire
• Relationship
Assessment Scale
• Godspeed
Questionnaire
Jung et al. (2013)/VII/ • Back-channeling • Team functioning • Task • (+)
Cat. 3 • Perceived robot complexity • (+)
engagement • (n.s.)
• Perceived robot
competence
You and Robert (2016)/ • Training • Individual performance • Curiosity • (+ /n.s.)
IV/Cat. 3 • Team performance • Control • (n.s./-)

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.

Table 14. Empirical Studies Related to Moderating Effects in Dyadic HRTs.

Author/Disciplinea/Main Independent Moderating


Category Variable(s)b Dependent Variable(s)b Moderator Variable(s) Effectb

Marble et al. (2004)/VII/Cat. 2 • Dynamic robot • Target detection • Session number • (+)
autonomy • Situation awareness • (+)

Marble et al. (2003)/VII/Cat. 3 • Mixed-initiative • Adaptation to autonomy • Remote system • n.s


interaction • Perceived ease to predict experience •( )
outcome of control

Richert et al. (2016)/II/Cat. 1 • Personal characteristics • Task performance • Subjective behavior • Not reported
• Robot characteristics o Stress
o Cooperation

Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.

members. We also recognize the common risk of publication bias for our study (Jager et al., 2020). Publication bias largely arises before and during scientific review processes, leaving us with limited possibilities to overcome it
completely (information on our efforts to address potential biases can be found
in Supplementary Appendix A).

Future Research Agenda


Beyond these considerations pertaining to our review, we note some un-
explored areas, both conceptual and empirical, that highlight the vast op-
portunities for learning more about the design, theoretical concepts, and
practical implications of HRTs. We structure those opportunities into two
broad categories: How can robots be team members, and when?

How can robots be team members? It would greatly advance the field if re-
search were to explain the mechanisms that underlie interaction in HRTs, based
on behavioral theories. Currently, no overall theory exists for HRTs, which leads
to unsteady theoretical foundations. Moreover, most studies do not offer solid
theoretical justification for their predictions (see, e.g., summary of limitations).
An approach already being used by some researchers relies on investigations of
all-human teams as bases for HRT research, which ensures a more theory-driven
effort (Krämer et al., 2012). In addition to social identity theory (Tajfel, 1974),
shared mental models, and gender studies, leader–member exchange theory as
applied to all-human teams (van Breukelen et al., 2006) might be a suitable
theoretical basis for research on HRTs with social robots in particular. Another
valuable effort might seek insights into how individuals, companies, and society
can prepare for HRTs. Since all the studies we found during our review focus on
existing HRTs (see Tables 2–17), many open questions remain regarding how to
prepare for HRTs. Researchers' responsibility extends beyond core HRT topics; in
particular, they should address the transition toward HRTs and how
individuals, companies, and society can engage beneficially in it.

When can robots be team members?. To address this broad question, we


recommend research that undertakes two main comparisons, as well as two
examinations. First, we call on researchers to compare different types of HRTs in
organizations. Traditional team research distinguishes permanent versus
project-based teams, top management versus work teams, and so forth (for an
overview, see Hollenbeck et al., 2012). Interaction modes, processes, and
outcomes likely differ across these teams (Hollenbeck et al., 2012; LePine
et al., 2008). But according to our review, different types of HRTs tend to be
studied in isolation (see Table 1), rather than compared in terms of similarities
and differences. Insights along these lines could improve the management of HRTs
in organizations and support human teammates. Second, research comparisons might
address different application scenarios of HRTs in organizations. Most HRT research
addresses specific application scenarios, such as rescue robots in USAR
(Kruijff-Korbayová et al., 2015) or robots working on the International Space Station
(Fong et al., 2005). Insights on HRTs in organizations in an office environment are
still scarce (see disciplines of studies in different categories). With an online
survey, we learned that acceptance of robots in work-related HRTs has increased,
especially during the COVID-19 pandemic. The results suggest four potential roles for
robots in HRTs (Figure 6): (1) robotic team assistant, supporting administrative and
coordination work; (2) robotic knowledge expert, providing expertise in a specific
field; (3) robotic scrum master (Scrum Alliance, 2021), working with the team and
ensuring that the team lives up to agile values and principles, such as through
coaching; and (4) robotic team leader, with institutionalized authority over other
team members. Third, researchers should examine HRTs in real-life settings. The
studies we reviewed are overwhelmingly conceptual or cross-sectional laboratory
studies (see study characteristics in different categories), with limited capacity to
transfer the findings to real-life settings (Levitt & List, 2005). Especially in
light of current developments in the world economy and the increasing relevance of
robots in everyday contexts, continued research should prioritize such real-life
examinations. Fourth, we hope more studies examine the long-term effects of HRTs.
Cross-sectional studies (see study characteristics in different categories) cannot
accurately depict longer-term relationships among team members, so continued studies
should seek to grasp all consequences of implementation efforts for HRTs. To do so,
appropriate methods for the long-term investigation of HRTs should be developed.

Figure 6. Potential roles of robots in HRTs. Note: Sources for icons: top left icon:
Scrum by Sharon Showalker; top right icon: leader by Oksana Latysheva; bottom right
icon: to do list by ArtWorkLeaf; bottom left icon: Brain by Alla Zeluska, all from
thenounproject.com

Table 15. Conceptual Integrative and Overarching Studies on Multiple-Member HRTs.

Author/Subcategory/Discipline^a/Team Interaction^b | Underlying Theories | Research Framework^c | Key Findings^d
Arnold and Scheutz (2017)/ethics/VIII/T+S | n.i. | n.i. | There are many ethical questions currently unsolved in HRI; "Robots do not have to be teammates to work with a team, especially given the ethical and empirical question of how the whole range of physical presence with a robot can affect others." (p. 449)
Ma et al. (2018)/HRT design/I/T+S | n.i. | n.i. | Overview of important considerations for the design of HRTs, including team and teamwork components
Oleson et al. (2011)/integrative study/IV/T+S | n.i. | n.i. | Inappropriate levels of trust can lead to disuse and/or misuse of robots; Proposal of a framework for human–robot trust
Robert (2018)/integrative study/IV/T+S | Motivational theories of individual and team motivation | n.i. | Proposal of "Motivational Theory of Human–Robot Teamwork" based on: emotional stability, extraversion, openness to experience, agreeableness, conscientiousness of a robot
Tamburrini (2009)/ethics/VIII/T+S | n.i. | n.i. | Robot ethics is a growing field that gains importance with the developments of new robots and technology
You and Robert (2018b)/integrative study/I/T+S | IPO model, trust theories | n.i. | Proposal of working framework for HRTs based on IMOI (inputs-mediators-outputs-inputs) framework

Note: ^a Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit"-approach and might comprise aspects of more than one considered research discipline. ^b Team interaction: T = task interaction, T+S = task & social interaction. ^c Effect (graphical symbols in the original): positive effect, negative effect, not significant, effect not reported. ^d None of the studies provide information on robot morphology, robot level, or type of embodiment. Only two studies provide information on team setup, focusing on autonomous mixed teams (Ma et al., 2018) and human-directed robot teams (You & Robert, 2018b), respectively.

Table 16. Empirical Integrative and Overarching Studies on Multiple-Member HRTs.

Author/Subcategory/Discipline^a | Robot Morphology^b/Robot Level^c/Type of Embodiment | Team Interaction^d/Team Setup^e | Data Basis/Participants^f/Time Frame^g | Underlying Theories | Research Framework^h | Key Findings
Burke et al. (2008)/metrics/I | Functional (Telemax UGV, Matilda UGV, Dragonrunner UV, AirRobot UAV)/↓/Physical robot | T | n = 31/participants from FEMA USAR teams in the US/C (in 2 phases) | n.i. | n.i. | Proposal and validation of measurement instruments for assessment of usability (team member), incidents (observer) and team processes (observer) in HRTs
Giachetti et al. (2013)/integrative study/III | n.i./n.i./Simulation/virtual robot | T/Combinations of n_robot = {2, 4} and n_team = {6, 12} | n.i./n.i./n.i. | Shared mental models | Number of robots (2, 4), team size (6, 12), team centralization (low, high), danger level (30%, 70%), and robot reliability (6, 10 hours) as inputs to performance and effectiveness (see key findings and study for detailed results and interaction effects) | Proposal and validation of agent-based simulation model for the examination of team designs; "there are limits to the number of robots that a team can effectively manage" (p. 25); "larger teams have more robust performance over the noise [i.e., not controllable] factors" (p. 15); "robot reliability is critical to the formation of human-robot teams" (p. 15); "high centralization of decision-making authority created communication bottlenecks at the commander in large teams" (p. 15)
Pina et al. (2008)/metrics/I | n.i./↓/n.i. | T+S | n = 16/age: range 19–49 years/C (four 8-minute sessions with different robotic team sizes) | n.i. | n.i. | Proposal of generalizable metric classes for the evaluation of HRTs and illustration of need for these with case study
Robert and You (2015)/integrative study/IV | Functional/humanoid (adapted from the LEGO® Mindstorms® EV3 sets)/↓/Physical robot | T+S | n = 30 (15 teams)/14 f; age: M = 24.7 (SD = 7.48); from large university in US/C (laboratory) | n.i. | n.i. | "subgroups formed between humans and their robots were negatively correlated with various team outcomes" (p. 1)
You and Robert (2017)/integrative study/IV | Functional/humanoid (adapted from the LEGO® Mindstorms® EV3 sets)/↓/Physical robot | T+S | n = 114 (in 57 teams)/51 m; age: M = 23 years (SD = 5.3); from online subject pool at a Midwestern university in US/C (duration with robots approx. 25–30 minutes) (between-subjects) | Media richness (channel expansion theory, cognitive model of media choice, media synchronicity), technology acceptance model, unified model of technology acceptance and use of technology, social identity theory | (not reproduced) | Emotional attachment of teams to robots leads to better performance; "Both robot and team identification increased a team's emotional attachment to its robots" (p. 377)
You and Robert (2019b)/integrative study/IV | Functional/humanoid (adapted from the LEGO® Mindstorms® EV3 sets)/↓/Physical robot | T+S | n = 108 (54 teams)/54 m; age: M = 24 years; from subject pool at a Midwestern university in US/C (duration approx. 25–30 minutes) | Social categorization and attraction theories, trust theories | (not reproduced) | "robot identification increased trust in robots and team identification increases trust in one's teammates" (p. 244); "Trust in robots increases team performance while trust in teammates increases satisfaction" (p. 244)
You and Robert (2019a)/integrative study/IV | Functional/humanoid (adapted from the LEGO® Mindstorms® EV3 sets)/↓/Physical robot | T+S | n = 88 (44 teams)/42 f; age: M = 23.6 (SD = 4.1); from large university in US/C (duration approx. 25–30 minutes) | Social identity theory, trust theories | (not reproduced) | Subgroups can form in HRTs (when humans identify with their robots); "Robot identification and team identification moderate . . . negative effects of subgroup formation on teamwork quality and subsequent team performance" (p. 1)

Note: ^a Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit"-approach and might comprise aspects of more than one considered research discipline. ^b n.i. = no information provided by author(s). ^c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. ^d Team interaction: T = task interaction, T+S = task & social interaction. ^e Team setup: s = human, □ = robot (graphical icons in the original). ^f Participants: f = female, m = male. ^g Time frame: C = cross-sectional, L = longitudinal. ^h Effect (graphical symbols in the original): positive effect, negative effect, not significant, effect not reported.

Table 17. Empirical Integrative and Overarching Studies on Dyadic HRTs.

Author/Subcategory/Discipline^a | Robot Morphology^b/Robot Level^c/Type of Embodiment | Team Interaction^d/Team Setup^e | Data Basis/Participants^f/Time Frame^g | Research Framework^h | Key Findings^i
Visser et al. (2006)/metrics/I/III | n.i./↓/n.i. | T+S | n = 12/4 f, age: range 18–25 years/C (3x5x6 mixed factorial design (2 within, 1 between)) | n.i. | Proposal and validation of measurement methodology for team performance of HRTs
You and Robert (2018a)/integrative study/II | Functional (PR2)/↓/Image/video of robot | T+S | n = 200/77 m, age: range 18–68 years (M = 36.5, SD = 10.77), mTurk, US/C | (not reproduced) | Human–robot (work-style) similarity helps to increase trust in a robot, leading to willingness to work with robots and ultimately to preference for robotic co-worker rather than human co-worker

Note: ^a Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit"-approach and might comprise aspects of more than one considered research discipline. ^b n.i. = no information provided by author(s). ^c Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. ^d Team interaction: T = task interaction, T+S = task & social interaction. ^e Team setup: s = human, □ = robot (graphical icons in the original). ^f Participants: f = female, m = male. ^g Time frame: C = cross-sectional, L = longitudinal. ^h Effect (graphical symbols in the original): positive effect, negative effect, not significant, effect not reported. ^i None of the studies indicated underlying theories.

Conclusion
Human–robot teams are an emerging phenomenon and part of the future of
work and society. Yet extant research lacks some important insights. With this
review, we identify several unexplored research areas, many of which pertain
to real-life, long-term HRT deployment considerations. We offer six prop-
ositions for continued research, reflecting the strong relevance of the topic and
considering current developments in the world economy. We hope this review
provides inspiration for ongoing HRT studies.

Declaration of Conflicting Interests


The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research project is funded by the
German Federal Ministry of Education and Research (BMBF) within the KompAKI
project. The authors are responsible for the content of this publication.

Supplemental Material
Supplemental material for this article is available online.

ORCID iD
Franziska Doris Wolf  https://orcid.org/0000-0002-7125-7597

Notes

1. The survey participants were recruited via Amazon Mechanical Turk. We sought
business leaders; they had an average of 6.99 (SD = 6.431) years of leadership ex-
perience in various industries, including IT (23.2%), banking/insurance (13.8%), and
health care/social sectors (9.9%). These leaders were responsible for teams (45.3%),
departments (29.9%), business areas (11.1%), or the whole company (13.8%). The
survey introduced social robots and their potential roles in organizations and issued the
prompt “I can imagine having a robot as an assistant/colleague/supervisor,” which
participants answered on a 5-point Likert scale (1 = “not at all,” 5 = “absolutely”).

2. Our focus explicitly is not on robot–robot teams, human–computer, or human–
machine interactions. Disembodied agents limit communication channels compared
with embodied agents (Deng et al., 2019), which in turn can limit the generalizability of
findings. Furthermore, detailed considerations of the roles of agents in teams extend
beyond the scope of this review.

3. In line with Breazeal (2003) and Fong et al. (2003), the low level of social interaction
(see Figure 2) includes so-called “socially evocative” (Breazeal 2003, p. 169) robots
that elicit social responses from humans without responding socially to them.

4. Using our proposed definition, we can distinguish HRTs from related concepts, such
as human–robot interaction (HRI) or human–robot collaboration (HRC). In particular,
HRI is "the study of the humans, robots, and the ways they influence each other"
(Fong, Thorpe, & Baur, 2001, p. 257), and HRC implies humans and robots "working
jointly with others or together especially in an intellectual endeavor" (Green,
Billinghurst, Chen, & Chase, 2008, p. 1). Similar to HRTs, the involved parties (robots
and humans) interact, such as by expressing or responding to emotions (Kreijns et al.,
2003). Yet HRC and HRTs are narrower than HRI, in that they pursue the achievement
of joint goals (Bradshaw et al., 2009; Marge et al., 2009; You & Robert, 2018b).
Uniquely in HRTs, team members work both interdependently and together (Bradshaw
et al., 2009; Ma et al., 2018).

References
Abrams, A. M. H., & der Pütten, A. M. R. (2020). I–C–E framework: Concepts for
group dynamics research in human-robot interaction. International Journal of
Social Robotics, 12(6), 1213-1229. https://doi.org/10.1007/s12369-020-00642-z.
ACM (2007). Proceedings of the 2007 ACM/IEEE Conference on Human-Robot
Interaction: Robot as Team Member. Association for Computing Machinery,
New York, NY, USA.
Adams, J. S. (1963). Towards an understanding of inequity. The Journal of Abnormal
and Social Psychology, 67, 422-436. https://doi.org/10.1037/h0040968.
Adams, J. S. (1965). Inequity in social exchange. In L. Berkowitz (Ed), Advances in
experimental social psychology (2nd ed., pp. 267-299). Elsevier, Burlington.
https://doi.org/10.1016/s0065-2601(08)60108-2.
Alboul, L., Saez-Pons, J., & Penders, J. (2008). Mixed human-robot team navigation in
the GUARDIANS project. In SSRR 2008: IEEE International Workshop on
Safety, Security and Rescue Robotics (pp. 95-101). IEEE Xplore. https://doi.org/
10.1109/SSRR.2008.4745884.
Ambrose, R. O., Aldridge, H., Askew, R. S., Burridge, R. R., Bluethmann, W., Diftler,
M., Lovchik, C., Magruder, D., & Rehnmark, F. (2000). Robonaut: NASA’s space
humanoid. IEEE Intelligent Systems and Their Applications, 15(4), 57-63. https://
doi.org/10.1109/5254.867913.
Arnold, T., & Scheutz, M. (2017). Beyond moral dilemmas: Exploring the ethical
landscape in HRI. In 12th ACM/IEEE International Conference on Human-Robot
Interaction (HRI) (pp. 445-452). IEEE. https://doi.org/10.1145/2909824.3020255
Arnold, T., & Scheutz, M. (2018). Observing robot touch in context: How does touch
and attitude affect perceptions of a robot’s social qualities? In HRI ’18, Pro-
ceedings of the 2018 ACM/IEEE International Conference on Human-Robot
Interaction. ACM. https://doi.org/10.1145/3171221.3171263.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in
social psychological research: Conceptual, strategic, and statistical considerations.
Journal of Personality and Social Psychology, 51(6), 1173-1182. https://doi.org/
10.1037//0022-3514.51.6.1173.
Barrick, M. R., Stewart, G. L., Neubert, M. J., & Mount, M. K. (1998). Relating
member ability and personality to work-team processes and team effectiveness.
Journal of Applied Psychology, 83(3), 377-391. https://doi.org/10.1037/0021-
9010.83.3.377.
Bartneck, C., Reichenbach, J., & Carpenter, J. (2006). Use of Praise and Punishment in
Human-Robot Collaborative Teams. In ROMAN 2006 - The 15th IEEE In-
ternational Symposium on Robot and Human Interactive Communication (pp.
177–182). IEEE. https://doi.org/10.1109/ROMAN.2006.314414
Bell, S. T., & Marentette, B. J. (2011). Team viability for long-term and ongoing
organizational teams. Organizational Psychology Review, 1(4), 275-292. https://
doi.org/10.1177/2041386611405876.
Bluethmann, W., Ambrose, R., Diftler, M., Askew, S., Huber, E., Goza, M., Rehnmark,
F., Lovchik, C., & Magruder, D. (2003). Robonaut: a robot designed to work with
humans in space. Autonomous Robots, 14(2-3), 179-197. https://doi.org/10.1023/
a:1022231703061.
Bozcuoglu, A. K., Yazdani, F., Beßler, D., Togorean, & Beetz, M. (2015). Reasoning
on communication between agents in a human-robot rescue team. In A. Aly,
S. Griffiths, & F. Stramandinoli (Eds.), Towards Intelligent Social
Robots: Current Advances in Cognitive Robotics: Workshop in Conjunction with
Humanoids 2015. Seoul, South Korea.
Bradshaw, J. M., Dignum, V., Jonker, C., & Sierhuis, M. (2012). Human-agent-robot
teamwork. IEEE Intelligent Systems, 27(2), 8-13. https://doi.org/10.1109/mis.2012.37.
Bradshaw, J. M., Feltovich, P., Johnson, M., Breedy, M., Bunch, L., Eskridge, T., Jung,
H., Lott, J., Uszok, A., & van Diggelen, J. (2009). From tools to teammates: Joint
activity in human-agent-robot teams. In M. Kurosu (Ed), Human Centered Design
(pp. 935-944). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-
02806-9_107.
Braga, J. L. (1972). Role theory, cognitive dissonance theory, and the interdisciplinary
team. Interchange, 3(4), 69-78. https://doi.org/10.1007/BF02145409.
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3-
4), 167-175. https://doi.org/10.1016/s0921-8890(02)00373-1.
Breazeal, C., Brooks, A., Chilongo, D., Gray, J., Hoffman, G., Kidd, C. D., Lee, H.,
Lieberman, J., & Lockerd, A. (2004). Working collaboratively with humanoid
robots. In 4th IEEE/RAS International Conference on Humanoid Robots, 2004.
IEEE. Santa Monica, CA, USA. https://doi.org/10.1109/ICHR.2004.1442126
Breazeal, C., Hoffman, G., & Lockerd, A. (2004). Teaching and working with robots as
a collaboration. In N. Jennings (Ed), Proceedings of the third International joint
conference on autonomous agents & multiagent systems (3, pp. 1030-1037). As-
sociation for Computing Machinery, USA. https://doi.org/10.1109/AAMAS.2004.
242646.
Breazeal, C., Kidd, C. D., Lockerd Thomaz, A., Hoffman, G., & Berlin, M. (2005).
Effects of nonverbal communication on efficiency and robustness in human-robot
teamwork. IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS). IEEE. Edmonton, Cananda. https://doi.org/10.1109/iros.2005.1545011.
Brown, S., Toh, J., & Sukkarieh, S. (2005). A flexible human-robot team framework
for information gathering missions. In C. Sammut (Ed), Proceedings of the 2005
Australasian Conference on Robotics & Automation. Australian Robotics and
Automation Association Inc. Sydney.
Bruemmer, D. J., Marble, J. L., & Dudenhoeffer, D. D. (2002). Mutual initiative in
human-machine teams. In J. J. Persensky, B. Hallbert, & H. Blackman (Eds.),
New century, new trends: Proceedings of the IEEE 7th Conference on
Human Factors and Power Plants. IEEE. https://doi.org/10.1109/HFPP.2002.
1042863.
Bruemmer, D. J., & Walton, M. C. (2003). Collaborative tools for mixed teams of
humans and robots [Conference presentation]. International Workshop on Multi-
Robot Systems. Washington, DC, USA.
Burke, J. L., & Murphy, R. R. (2004). Human-robot interaction in USAR technical search:
two heads are better than one. In RO-MAN 2004. 13th IEEE International Workshop
on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759) (pp.
307–312). IEEE. https://doi.org/10.1109/ROMAN.2004.1374778.
Burke, J. L., & Murphy, R. R. (2007). RSVP: An investigation of remote shared visual
presence as common ground for human-robot teams. In Proceedings of the 2007
ACM/IEEE Conference on Human-Robot Interaction: Robot as team member
(p. 161–168). ACM. https://doi.org/10.1145/1228716.1228738.
Burke, J. L., Pratt, K. S., Murphy, R. R., Lineberry, M., Taing, M., & Day, B. (2008).
Toward developing HRI metrics for teams: Pilot testing in the field. In C. R.
Burghart, & A. Steinfeld (Chairs), Proceedings of metrics for human-robot in-
teraction workshop in affiliation with the 3rd ACM/IEEE international conference
of human-robot interaction (HRI 2008). Amsterdam, The Netherlands.
Canning, C., Donahue, T. J., & Scheutz, M. (2014). Investigating human perceptions of
robot capabilities in remote human-robot team tasks based on first-person robot
video feeds IEEE/RSJ International Conference on Intelligent Robots and Sys-
tems. IEEE, Chicago, IL, USA. https://doi.org/10.1109/iros.2014.6943178.
Chen, M., Nikolaidis, S., Soh, H., Hsu, D., & Srinivasa, S. (2020). Trust-aware de-
cision making for human-robot collaboration: Model learning and planning. ACM
Transactions on Human-Robot Interaction, 9(2), 1-23. https://doi.org/10.1145/3359616.
Ciocirlan, S.-D., Agrigoroaie, R., & Tapus, A. (2019). Human-robot team: Effects of
communication in analyzing trust. 28th IEEE International Conference on Robot
and Human Interactive Communication. New Delhi, India. https://doi.org/10.
1109/ro-man46459.2019.8956345.
Claure, H., Chen, Y., Modi, J., Jung, M. F., & Nikolaidis, S. (2020). Multi-armed
bandits with fairness constraints for distributing resources to human teammates.
HRI ’20, Proceedings of the 2020 ACM/IEEE International Conference on
Human-Robot Interaction (pp. 299-308). ACM. https://doi.org/10.1145/3319502.
3374806.
Correia, F., Mascarenhas, S. F., Gomes, S., Arriaga, P., Leite, I., Prada, R., Melo, F. S.,
& Paiva, A. (2019). Exploring prosociality in human-robot teams. Proceedings of
the 14th ACM/IEEE International Conference on Human-Robot Interaction
(pp. 143-151). IEEE. https://doi.org/10.1109/HRI.2019.8673299.
Correia, F., Petisca, S., Alves-Oliveira, P., Ribeiro, T., Melo, F. S., & Paiva, A. (2019).
“I choose YOU!” Membership preferences in human–robot teams. Autonomous
Robots, 43(2), 359-373. Springer Nature. https://doi.org/10.1007/s10514-018-
9767-9.
Crandall, J. W., Nielsen, C. W., & Goodrich, M. A. (2003). Towards predicting robot
team performance. SMC’03 Conference Proceedings. 2003 IEEE International
Conference on Systems, Man and Cybernetics. Conference Theme - System
Security and Assurance. (Cat.: No.03CH37483).
Davis, F. D. (1986). A technology acceptance model for empirically testing new end-
user information systems: Theory and results. [Doctoral dissertation, Massa-
chusetts Institute of Technology]. DSpace@MIT. http://hdl.handle.net/1721.1/
15192.
De Visser, E., Parasuraman, R., Freedy, A., Freedy, E., & Weltman, G. (2006). A
comprehensive methodology for assessing human-robot team performance for
use in training and simulation. Proceedings of the Human Factors and Ergo-
nomics Society Annual Meeting, 50(25), 2639-2643. https://doi.org/10.1177/
154193120605002507.
de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., &
Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in
human–robot teams. International Journal of Social Robotics, 12(2), 459-478.
https://doi.org/10.1007/s12369-019-00596-x.
de Wit, F. R. C., & Greer, L. L. (2008). The black-box deciphered: a meta-analysis of
team diversity, conflict, and team performance. Academy of Management Pro-
ceedings, 1. https://doi.org/10.5465/ambpp.2008.33716526.
DeChurch, L. A., Mesmer-Magnus, J. R., & Doty, D. (2013). Moving beyond re-
lationship and task conflict: Toward a process-state perspective. Journal of Ap-
plied Psychology, 98(4), 559-578. https://doi.org/10.1037/a0032896.
Dell Technologies (2018). Realizing 2030: A divided vision of the future. www.
delltechnologies.com/realizing2030.
Demir, M., McNeese, N. J., & Cooke, N. J. (2020). Understanding human-robot teams
in light of all-human teams: Aspects of team interaction and shared cognition.
International Journal of Human-Computer Studies, 140, 102436. https://doi.org/
10.1016/j.ijhcs.2020.102436.
Deng, E., Mutlu, B., & Mataric, M. J. (2019). Embodiment in socially interactive robots.
Foundations and Trends® in Robotics, 7(4), 251-356. https://doi.org/10.1561/2300000056.
Dennis, A. R., Fuller, R. M., & Valacich, J. S. (2008). Media, tasks, and commu-
nication processes: A theory of media synchronicity. MIS Quarterly, 32(3), 575.
https://doi.org/10.2307/25148857.
Dennis, A. R., & Kinney, S. T. (1998). Testing media richness theory in the new media:
The effects of cues, feedback, and task equivocality. Information Systems Re-
search, 9(3), 256-274. https://doi.org/10.1287/isre.9.3.256.
Dias, M. B., Kannan, B., Browning, B., Jones, G., Argall, B., Dias, M. F., Zinck, M.,
Veloso, M. M., Stentz, A., & April, A. (2008). Sliding autonomy for peer-to-peer
human-robot teams (CMU-RI-TR-08-16). Pittsburgh, PA: Carnegie Mellon
University.
Dudenhoeffer, D. D., Bruemmer, D. J., & Davis, M. L. (2001). Modeling and sim-
ulation for exploring human-robot team interaction requirements. Proceeding of
the 2001 Winter Simulation Conference (Cat. No.01CH37304) (11, pp. 730-739).
IEEE. https://doi.org/10.1109/WSC.2001.977361.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems.
Human Factors, 37(1), 32-64. https://doi.org/10.1518/001872095779049543.
Eyssel, F., & Kuchenbrandt, D. (2012). Social categorization of social robots: An-
thropomorphism as a function of robot group membership. The British Journal of
Social Psychology, 51(4), 724-731. https://doi.org/10.1111/j.2044-8309.2011.
02082.x.
Fiore, S. M., Badler, N. L., Boloni, L., Goodrich, M. A., Wu, A. S., & Chen, J. (2011).
Human-robot teams collaborating socially, organizationally, and culturally. Pro-
ceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1),
465-469. https://doi.org/10.1177/1071181311551096.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive
robots. Robotics and Autonomous Systems, 42(3), 143-166. https://doi.org/10.
1016/S0921-8890(02)00372-X.
Fong, T., Nourbakhsh, I., Kunz, C., Fluckiger, L., Schreiner, J., Ambrose, R., Burridge, R.,
Simmons, R., Hiatt, L. M., Schultz, A., Trafton, J. G., Bugajska, M., & Scholtz, J.
(2005). The peer-to-peer human-robot interaction project SPACE Conferences and
Exposition: Space 2005. American Institute of Aeronautics and Astronautics. Long
Beach, California. https://doi.org/10.2514/6.2005-6750.
Fong, T., Scholtz, J., Shah, J. A., Fluckiger, L., Kunz, C., Lees, D., Schreiner, J., Siegel,
M., Hiatt, L. M., Nourbakhsh, I., Simmons, R., Ambrose, R., Burridge, R.,
Antonishek, B., Bugajska, M., Schultz, A., & Trafton, J. G. (2006). A preliminary
study of peer-to-peer human-robot interaction. 2006 IEEE International Con-
ference on Systems, Man and Cybernetics (4, pp. 3198-3203). IEEE. https://doi.
org/10.1109/ICSMC.2006.384609.
Fong, T., Thorpe, C., & Baur, C. (2001). Collaboration, dialogue, and human-robot
interaction. In R. A. Jarvis, & Z. Alexander (Eds), Springer Tracts in Advanced
Robotics: Vol. 6, Robotics Research, The Tenth International Symposium, ISRR
2001. Springer.
Forlizzi, J., & DiSalvo, C. F. (2006). Service robots in the domestic environment. HRI ’06,
Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot In-
teraction (p. 258). ACM. https://doi.org/10.1145/1121241.1121286.
Fraune, M. R., Oisted, B. C., Sembrowski, C. E., Gates, K. A., Krupp, M. M., &
Šabanović, S. (2020). Effects of robot-human versus robot-robot behavior and
entitativity on anthropomorphism and willingness to interact. Computers in
Human Behavior, 105. https://doi.org/10.1016/j.chb.2019.106220.
Fraune, M. R., Sabanovic, S., & Smith, E. R. (2017). Teammates first: Favoring
ingroup robots over outgroup humans. 26th IEEE International Symposium on
Robot and Human Interactive Communication (RO-MAN) (pp. 1432-1437).
IEEE. https://doi.org/10.1109/ROMAN.2017.8172492.
Freedy, A., De Visser, E. J., Gershon, W., & Coeyman, N. C. (2007). Measurement of
trust in human-robot collaboration. International Symposium on Collaborative
Technologies and Systems. IEEE. Orlando, FL, USA. https://doi.org/10.1109/cts.
2007.4621745.
Fuse, Y., & Tokumaru, M. (2020). Social influence of group norms developed by
human-robot groups. IEEE Access, 8, 56081-56091. https://doi.org/10.1109/
ACCESS.2020.2982181.
Gao, F., Cummings, M. L., & Bertuccelli, L. F. (2012). Teamwork in controlling
multiple robots. In HRI ’12. Proceedings of the 7th ACM/IEEE international
conference on Human-robot interaction (pp. 81-88). ACM, New York, NY, USA.
https://doi.org/10.1145/2157689.2157703.
Gervits, F., Thurston, D., Thielstrom, R., Fong, T., Pham, Q., & Scheutz, M. (2020).
Toward genuine robot teammates: Improving human-robot team performance
using robot shared mental models. AAMAS ’20, Proceedings of the 19th In-
ternational Conference on Autonomous Agents and Multi-Agent Systems
(pp. 429-437). International Foundation for Autonomous Agents and Multiagent
Systems.
Giachetti, R. E., Marcelli, V., Cifuentes, J., & Rojas, J. A. (2013). An agent-based
simulation model of human-robot team performance in military environments.
Systems Engineering, 16(1), 15-28. https://doi.org/10.1002/sys.21216.
Gladden, M. E. (2014). The social robot as ’charismatic leader’: A phenomenology of
human submission to nonhuman power. In J. Seibt, R. Hakli, & M. Nørskov (Eds),
Sociable Robots and The Future of Social Relations: Proceedings of Robo-
Philosophy 2014. (Vol. 273). IOS Press, Amsterdam, the Netherlands. https://
doi.org/10.3233/978-1-61499-480-0-329.
Gladstein, D. L. (1984). Groups in context: A model of task group effectiveness.
Administrative Science Quarterly, 29(4), 499-517. https://doi.org/10.2307/
2392936.
Gombolay, M. C., Bair, A., Huang, C., & Shah, J. (2017). Computational design of
mixed-initiative human–robot teaming that considers human factors: Situational
awareness, workload, and workflow preferences. The International Journal of
Robotics Research, 36(5-7), 597-617. https://doi.org/10.1177/0278364916688255.
Gombolay, M. C., Gutierrez, R. A., Clarke, S. G., Sturla, G. F., & Shah, J. A. (2015).
Decision-making authority, team efficiency and human worker satisfaction in
mixed human–robot teams. Autonomous Robots, 39(3), 293-312. https://doi.org/
10.1007/s10514-015-9457-9.
Gombolay, M. C., Huang, C., & Shah, J. A. (2015). Coordination of human-robot
teaming with human task preferences. AAAI Fall Symposium: Technical Report
FW-15-01. AAAI. Arlington, VA, USA.
Goodrich, M. A., McLain, T. W., Anderson, J. D., Sun, J., & Crandall, J. W. (2007).
Managing autonomy in robot teams: Observations from four experiments. Pro-
ceedings of the 2007 ACM/IEEE Conference on Human-Robot Interaction: Robot
as team member (pp. 25–32). ACM. https://doi.org/10.1145/1228716.1228721.
Green, S. A., Billinghurst, M., Chen, X., & Chase, J. G. (2008). Human-robot collab-
oration: A literature review and augmented reality approach in design. International
Journal of Advanced Robotic Systems, 5(1), 1. https://doi.org/10.5772/5664.
Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human–robot
teams. Interaction Studies, 8(3), 483-500. https://doi.org/10.1075/is.8.3.10gro.
Harriott, C. E., Zhang, T., & Adams, J. A. (2011). Evaluating the applicability of
current models of workload to peer-based human-robot teams. HRI ’11, Pro-
ceedings of the 6th ACM/IEEE International Conference on Human-Robot In-
teraction. ACM. https://doi.org/10.1109/ROMAN.2016.7745257.
Hayes, B., & Scassellati, B. (2014). Challenges in shared-environment human-robot
collaboration. AI Matters, 1(2), 22-23. https://doi.org/10.1145/2685328.2685335.
Hentschel, T., Heilman, M. E., & Peus, C. V. (2019). The multiple dimensions of gender
stereotypes: A current look at men’s and women’s characterizations of others and
themselves. Frontiers in Psychology, 10, 11. https://doi.org/10.3389/fpsyg.2019.
00011.
Hiatt, L. M., Harrison, A. M., & Trafton, J. G. (2011). Accommodating human
variability in human-robot teams through theory of mind. In: IJCAI’11 Proceedings
of the Twenty-Second International Joint Conference on Artificial Intelligence -
(Vol 3, pp. 2066-2071). AAAI Press. https://doi.org/10.5591/978-1-57735-516-8/
IJCAI11-345.
Hiatt, L. M., & Trafton, J. G. (2010). A cognitive model of theory of mind. Proceedings
of the 10th International Conference on Cognitive Modeling, ICCM, pp. 91-96.
High-Level Expert Group on Artificial Intelligence (2019). A definition of AI:
Main capabilities and scientific disciplines. Brussels, Belgium. https://digital-
strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-
scientific-disciplines.
Hoffman, G., & Breazeal, C. (2004). Collaboration in human-robot teams. AIAA 1st
Intelligent Systems Technical Conference. AIAA. Chicago, IL, USA. https://doi.
org/10.2514/6.2004-6434.
Hoffman, G., & Breazeal, C. (2007). Effects of anticipatory action on human-robot


teamwork efficiency, fluency, and perception of team. Proceedings of the 2007
ACM/IEEE Conference on Human-Robot Interaction: Robot as team member
(pp. 1–8). ACM. https://doi.org/10.1145/1228716.1228718.
Hogg, M. A., & Turner, J. C. (1985). Interpersonal attraction, social identification and
psychological group formation. European Journal of Social Psychology, 15(1),
51-66. https://doi.org/10.1002/ejsp.2420150105.
Hollenbeck, J. R., Beersma, B., & Schouten, M. E. (2012). Beyond team types and
taxonomies: A dimensional scaling conceptualization for team description.
Academy of Management Review, 37(1), 82-106. https://doi.org/10.5465/armr.
2010.0181.
Iqbal, T., Gonzales, M. J., & Riek, L. D. (2015). Joint action perception to enable fluent
human-robot teamwork. 24th IEEE International Symposium on Robot and
Human Interactive Communication. (pp. 404-406). IEEE. https://doi.org/10.1109/
roman.2015.7333671.
Iqbal, T., Rack, S., & Riek, L. D. (2016). Movement coordination in human–robot
teams: A dynamical systems approach. IEEE Transactions on Robotics, 32(4),
909-919. https://doi.org/10.1109/TRO.2016.2570240.
Iqbal, T., & Riek, L. D. (2017). Coordination dynamics in multihuman multirobot
teams. IEEE Robotics and Automation Letters, 2(3), 1712-1717. https://doi.org/
10.1109/LRA.2017.2673864.
Jager, K. J., Tripepi, G., Chesnaye, N. C., Dekker, F. W., Zoccali, C., & Stel, V. S. (2020).
Where to look for the most frequent biases? Nephrology, 25(6), 435-441. https://
doi.org/10.1111/nep.13706.
Jiang, L., & Wang, Y. (2019). Respect your emotion: Human-multi-robot teaming
based on regret decision model. In IEEE 15th International Conference on Au-
tomation Science and Engineering (CASE) (pp. 936–941). IEEE. https://doi.org/
10.1109/COASE.2019.8843206.
Jung, M. F., Lee, J. J., DePalma, N., Adalgeirsson, S. O., Hinds, P. J., & Breazeal, C.
(2013). Engaging robots: Easing complex human-robot teamwork using back-
channeling. CSCW 2013: Proceedings of the 2013 ACM conference on computer
supported cooperative work (pp. 1555-1566). ACM. https://doi.org/10.1145/
2441776.2441954.
Jung, M. F., Martelaro, N., & Hinds, P. J. (2015). Using robots to moderate team
conflict: The case of repairing violations. HRI’15, Proceedings of the 10th ACM/
IEEE International Conference on Human-Robot Interaction (pp. 229-236). IEEE.
https://doi.org/10.1145/2696454.2696460.
Jung, M. F., Šabanović, S., Eyssel, F., & Fraune, M. R. (2017). Robots in groups and
teams. Companion of the 2017 ACM Conference on Computer Supported Co-
operative Work and Social Computing (pp. 401-407). ACM. https://doi.org/10.
1145/3022198.3022659.
Kanfer R., Chen G., & Pritchard R. D. (Eds), (2008). Work motivation: Past, present
and future (Vol. 27). Routledge/Taylor & Francis Group.
Kantor, G., Singh, S., Peterson, R., Rus, D., Das, A., Kumar, V., Pereira, G., & Spletzer,
J. (2006). Distributed search and rescue with robot and sensor teams. In S. Yuta, H.
Asama, E. Prassler, T. Tsubouchi, & S. Thrun (Eds), Field and service robotics:
Recent advances in reserch and applications (pp. 529-538). Springer Berlin
Heidelberg. https://doi.org/10.1007/10991459_51.
Kelley, T. L. (1927). Interpretation of educational measurements. Measurement and
adjustment Series. World Book Co.
Kelly, R., & Watts, L. (2017). Slow but likeable? Inefficient robots as caring team
members. In M. F. Jung, S. Sabanovic, F. Eyssel, & M. R. Fraune (Eds.),
Robots in groups and teams: A CSCW 2017 Workshop. https://hri.cornell.edu/
robots-in-groups/.
Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. Robotics and
Autonomous Systems, 58(3), 322-332. https://doi.org/10.1016/j.robot.2009.09.015.
Kittmann, R., Fröhlich, T., Schäfer, J., Reiser, U., Weißhardt, F., & Haug, A. (2015).
Let me introduce myself: I am Care-O-bot 4, a gentleman robot. In S. Diefenbach,
N. Henze, & M. Pielot (Eds), Mensch und Computer 2015 – Proceedings
(pp. 223-232). De Gruyter Oldenbourg.
Koppula, H. S., Jain, A., & Saxena, A. (2016). Anticipatory planning for human-robot
teams. In M. A. Hsieh, O. Khatib, & V. Kumar (Eds), Experimental Robotics: The
14th International Symposium on Experimental Robotics (109th ed., pp. 453-470).
Springer International Publishing. https://doi.org/10.1007/978-3-319-23778-7_30.
Kozlowski, S. W. J., & Bell, B. S. (2003). Work groups and teams in organizations. In
W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds), Handbook of Psychology:
Industrial and Organizational Psychology/Industrial and organizational psy-
chology (Vol. 12, pp. 333-375). John Wiley & Sons. https://doi.org/10.1002/
0471264385.wei1214.
Krämer, N. C., Pütten, A., & Eimler, S. (2012). Human-agent and human-robot in-
teraction theory: Similarities to and differences from human-human interaction. In
M. Zacarias, & J. V. de Oliveira (Eds), Studies in Computational Intelligence.
Human-Computer Interaction: The Agency Perspective (396, pp. 215-240).
Springer. https://doi.org/10.1007/978-3-642-25691-2_9.
Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social
interaction in computer-supported collaborative learning environments: a review
of the research. Computers in Human Behavior, 19(3), 335-353. https://doi.org/10.
1016/s0747-5632(02)00057-2.
Kruijff-Korbayová, I., Colas, F., Gianni, M., Pirri, F., Greeff, J., Hindriks, K., Neerincx,
M. A., Ögren, P., Svoboda, T., & Worst, R. (2015). TRADR project: Long-term
human-robot teaming for robot assisted disaster response. Künstliche Intelligenz,
29(2), 193-201. https://doi.org/10.1007/s13218-015-0352-5.
Kruijff, G. J. M., Janı́ček, M., Keshavdas, S., Larochelle, B., Zender, H., Smets,
N. J. J. M., Mioch, T., Neerincx, M. A., Diggelen, J. V., Colas, F., Liu, M.,
Pomerleau, F., Siegwart, R., Hlaváč, V., Svoboda, T., Petřı́ček, T., Reinstein,
M., Zimmermann, K., Pirri, F., ... Gianni, M., Papadakis, P., Sinha, A.,
Balmer, P., Tomatis, N., Worst, R., Linder, T., Surmann, H., Tretyakov, V.,
Corrao, S., Pratzler-Wanczura, S., & Sulk, M. (2014). Experience in system


design for human-robot teaming in urban search and rescue. In K. Yoshida, &
S. Tadokoro (Eds), Springer Tracts in Advanced Robotics. Field and Service
Robotics (92, pp. 111-125). Springer. https://doi.org/10.1007/978-3-642-
40686-7_8.
Kruijff, G.-J., Janicek, M., & Zender, H. (2012). Situated communication for joint
activity in human-robot teams. IEEE Intelligent Systems, 27(2), 27-35. https://doi.
org/10.1109/mis.2012.8.
Kruijff, G. J. M., Kruijff-Korbayová, I., Keshavdas, S., Larochelle, B., Janı́ček, M.,
Colas, F., Liu, M., Pomerleau, F., Siegwart, R., Neerincx, M. A., Looije, R., Smets,
N. J. J. M., Mioch, T., van Diggelen, J., Pirri, F., Gianni, M., Ferri, F., Menna, M.,
Worst, R., Linder, T., Tretyakov, V., Surmann, H., Svoboda, T., Reinštein, M.,
Zimmermann, K., Petřı́ček, T., & Hlaváč, V. (2014). Designing, developing, and
deploying systems to support human–robot teams in disaster response. Advanced
Robotics, 28(23), 1547-1570. https://doi.org/10.1080/01691864.2014.985335.
Kuchenbrandt, D., Eyssel, F., Bobinger, S., & Neufeld, M. (2013). When a robot’s
group membership matters. International Journal of Social Robotics, 5(3),
409-417. https://doi.org/10.1007/s12369-013-0197-8.
Kwon, M., Li, M., Bucquet, A., & Sadigh, D. (2019). Influencing leading and fol-
lowing in human-robot teams. Proceedings of Robotics: Science Systems. https://
doi.org/10.15607/rss.2019.xv.075.
Law, T., Chita-Tegmark, M., & Scheutz, M. (2020). The interplay between emotional
intelligence, trust, and gender in human–robot interaction. International Journal
of Social Robotics, 13(3), 297-309. https://doi.org/10.1007/s12369-020-00624-1.
Lee, P.-J., Wang, H., Chien, S.-Y., Lewis, M., Scerri, P., Velagapudi, P., Sycara, K., &
Kane, B. (2010). Teams for teams: Performance in multi-human/multi-robot
teams. Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, 54(4), 438-442. https://doi.org/10.1177/154193121005400435.
Lei, X., & Rau, P.-L. P. (2020). Should I blame the human or the robot? Attribution
within a human–robot group. International Journal of Social Robotics, 13,
363-377. https://doi.org/10.1007/s12369-020-00645-w.
LePine, J. A., Piccolo, R. F., Jackson, C. L., Mathieu, J. E., & Saul, J. R. (2008). A
meta-analysis of teamwork processes: Tests of a multidimensional model and
relationships with team effectiveness criteria. Personnel Psychology, 61(2),
273-307. https://doi.org/10.1111/j.1744-6570.2008.00114.x.
Levitt, S. D., & List, J. A. (2005). What do laboratory experiments tell us about the real
world? Journal of Economic Perspectives, 21.
Lewis, M., Wang, H., Chien, S.-Y., Scerri, P., Velagapudi, P., Sycara, K., & Kane, B.
(2010). Teams organization and performance in multi-human/multi-robot teams.
IEEE International Conference on Systems, Man and Cybernetics
(pp. 1617-1623). IEEE. https://doi.org/10.1109/icsmc.2010.5642379.
Liu, C., & Tomizuka, M. (2014). Modeling and controller design of cooperative robots
in workspace sharing human-robot assembly teams. IEEE/RSJ International
Conference on Intelligent Robots and Systems. Chicago, IL, USA. https://doi.org/


10.1109/iros.2014.6942738.
Lo, S.-Y., Short, E. S., & Thomaz, A. L. (2020). Planning with partner uncertainty
modeling for efficient information revealing in teamwork. HRI’20, Proceedings of
the 2020 ACM/IEEE International Conference on Human-Robot Interaction
(pp. 319-327). ACM. https://doi.org/10.1145/3319502.3374827.
Malone, T. W., & Crowston, K. (1990). What is coordination theory and how can it
help design cooperative work systems? In F. Halasz (Ed), Proceedings of the 1990
ACM conference on Computer-supported cooperative work - CSCW (90,
pp. 357-370). ACM Press. https://doi.org/10.1145/99332.99367.
Manikonda, V., Ranjan, P., & Kulis, Z. (2007). A mixed Initiative controller and
testbed for human robot teams in tactical operations. In T. W. Finin (Ed), Technical
Report: FS-07-06. Regarding the “intelligence” in distributed intelligent systems:
Papers from the AAAI Fall Symposium. AAAI Press.
Marble, J. L., Bruemmer, D. J., & Few, D. A. (2003). Lessons learned from usability
tests with a collaborative cognitive workspace for human-robot teams. SMC’03
Conference Proceedings. 2003 IEEE International Conference on Systems.
Washington, DC, USA: Man and Cybernetics.
Marble, J. L., Bruemmer, D. J., Few, D. A., & Dudenhoeffer, D. D. (2004). Evaluation
of supervisory vs. peer-peer interaction with human-robot teams. 37th Annual
Hawaii International Conference on System Sciences. USA: Big Island, HI.
https://doi.org/10.1109/hicss.2004.1265326.
Marge, M. R., Pappu, A. K., Frisch, B., Harris, T. K., & Rudnicky, A. (2009). Ex-
ploring spoken dialog interaction in human-robot teams. In S. Balakirsky, S.
Carpin, & M. Lewis (Eds.), Proceedings of the International Conference on
Intelligent Robots and Systems (IROS 2009), Workshop on Robots, Games, and
Research: Success stories in USARSim. St. Louis, MO, USA. http://repository.
cmu.edu/compsci/1351.
Marsh, H. W., Pekrun, R., Parker, P. D., Murayama, K., Guo, J., Dicke, T., & Arens,
A. K. (2019). The murky distinction between self-concept and self-efficacy:
Beware of lurking jingle-jangle fallacies. Journal of Educational Psychology,
111(2), 331-353. https://doi.org/10.1037/edu0000281.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific
technology. ACM Transactions on Management Information Systems, 2(2), 1-25.
https://doi.org/10.1145/1985347.1985353.
Minato, T., Shimada, M., Ishiguro, H., & Itakura, S. (2004). Development of an
android robot for studying human-robot interaction. In B. Orchard, C. Yang, & M.
Ali (Eds), Innovations in Applied Artificial Intelligence (pp. 424-434). Springer
Berlin Heidelberg. https://doi.org/10.1007/978-3-540-24677-0_44.
Mingyue Ma, L., Fong, T., Micire, M. J., Kim, Y. K., & Feigh, K. (2018).
Human-robot teaming: concepts and components for design. In M. Hutter
& R. Siegwart (Eds), Field and Service Robotics (Vol. 5, pp. 649-663). Springer
International Publishing. https://doi.org/10.1007/978-3-319-67361-5_42.
Musić, S., & Hirche, S. (2018). Passive noninteracting control for human-robot team
interaction. IEEE Conference on Decision and Control (CDC). Miami Beach, FL,
USA. https://doi.org/10.1109/cdc.2018.8619289.
Musić, S., Salvietti, G., Dohmann, P. B. G., Chinello, F., Prattichizzo, D., & Hirche, S.
(2019). Human–robot team interaction through wearable haptics for cooperative
manipulation. IEEE Transactions on Haptics, 12(3), 350-362. https://doi.org/10.
1109/TOH.2019.2921565.
Nakano, H., & Goodrich, M. A. (2015). Graphical narrative interfaces: Representing
spatiotemporal information for a highly autonomous human-robot team. 24th
IEEE International Symposium on Robot and Human Interactive Communication.
Kobe, Japan. https://doi.org/10.1109/roman.2015.7333684.
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In B.
Adelson, S. Dumais, & J. Olson (Eds), CHI ’94, CHI ’94: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems (pp. 72-78). ACM.
https://doi.org/10.1145/191666.191703.
Natarajan, M., & Gombolay, M. C. (2020). Effects of anthropomorphism and ac-
countability on trust in human robot interaction. HRI’20, Proceedings of the 2020
ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-42).
ACM. https://doi.org/10.1145/3319502.3374839.
Nevatia, Y., Stoyanov, T., Rathnam, R., Pfingsthorn, M., Markov, S., Ambrus, R., &
Birk, A. (2008). Augmented autonomy: Improving human-robot team performance
in urban search and rescue. IEEE/RSJ international Conference on Intelligent
Robots and Systems. Nice, France. https://doi.org/10.1109/iros.2008.4651034.
Nikolaidis, S., Kwon, M., Forlizzi, J., & Srinivasa, S. (2018). Planning with verbal
communication for human-robot collaboration. ACM Transactions on Human-Robot
Interaction, 7(3), 1-21. https://doi.org/10.1145/3203305.
Nikolaidis, S., Lasota, P., Ramakrishnan, R., & Shah, J. (2015). Improved human–
robot team performance through cross-training, an approach inspired by human
team training practices. The International Journal of Robotics Research, 34(14),
1711-1730. https://doi.org/10.1177/0278364915609673.
Nikolaidis, S., & Shah, J. (2012). Human-robot teaming using shared mental models.
HRI’12. Proceedings of the 7th ACM/IEEE international conference on Human-
Robot Interaction. ACM.
Nikolaidis, S., & Shah, J. (2013). Human-robot cross-training: Computational for-
mulation, modeling and evaluation of a human team training strategy. Proceedings
of the 8th ACM/IEEE International Conference on Human-Robot Interaction.
https://doi.org/10.1109/hri.2013.6483499.
Nourbakhsh, I. R., Sycara, K., Koes, M., Yong, M., Lewis, M., & Burion, S. (2005).
Human-robot teaming for search and rescue. IEEE Pervasive Computing, 4(1),
72-78. https://doi.org/10.1109/MPRV.2005.13.
Oh, J., Navarro-Serment, L., Suppe, A., Stentz, A., & Hebert, M. (2015). Inferring door
locations from a teammate’s trajectory in stealth human-robot team operations
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS) (pp. 5315-5320). IEEE. https://doi.org/10.1109/IROS.2015.7354127.
Oleson, K. E., Billings, D. R., Kocsis, V., Chen, J. Y. C., & Hancock, P. A. (2011).
Antecedents of trust in human-robot collaborations. 2011 IEEE International
Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and
Decision Support. Miami Beach, FL, USA: CogSIMA). https://doi.org/10.1109/
cogsima.2011.5753439.
Onnasch, L., & Roesler, E. (2020). A taxonomy to structure and analyze human–robot
interaction. International Journal of Social Robotics. https://doi.org/10.1007/
s12369-020-00666-5.
Ososky, S., Schuster, D., Phillips, E., & Jentsch, F. (2013). Building appropriate trust in
human-robot teams. AAAI Spring Symposium: Trust and Autonomous Systems.
Pages, J., Marchionni, L., & Ferro, F. (2016). TIAGo: The modular robot that adapts to
different research needs. International Workshop on Robot Modularity. Daejeon,
Korea.
Pandey, A. K., & Gelin, R. (2018). A Mass-Produced Sociable Humanoid Robot:
Pepper: The First Machine of Its Kind. IEEE Robotics & Automation Magazine,
25(1), 40-48. https://doi.org/10.1109/MRA.2018.2833157.
Phillips, E., Ososky, S., Swigert, B., & Jentsch, F. (2012). Human-animal teams as an
analog for future human-robot teams. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 56(1), 1553-1557. https://doi.org/10.1177/
1071181312561309.
Phillips, E., Schaefer, K. E., Billings, D. R., Jentsch, F., & Hancock, P. A. (2016).
Human-animal teams as an analog for future human-robot teams: Influencing
design and fostering trust. Journal of Human-Robot Interaction, 5(1), 100-125.
https://doi.org/10.5898/JHRI.5.1.
Pina, P., Cummings, M. L., Crandall, J. W., & Della Penna, M. (2008). Identifying
generalizable metric classes to evaluate human-robot teams. In C. R. Burghart &
A. Steinfeld (Eds.), Proceedings of metrics for human-robot interaction
workshop in affiliation with the 3rd ACM/IEEE international conference of
human-robot interaction (HRI 2008). Amsterdam, The Netherlands.
Ranzato, A. J., & Vertesi, J. (2017). From I-robot to we-robot: Effects of team structure
on robotic tasks. In M. F. Jung, S. Sabanovic, F. Eyssel, & M. R. Fraune (Eds.),
Robots in groups and teams: A CSCW 2017 workshop. https://hri.cornell.
edu/robots-in-groups/.
Richert, A., Shehadeh, M., Müller, S., Schröder, S., & Jeschke, S. (2016). Robotic
workmates: Hybrid human-robot-teams in the Industry 4.0. In R. M. Idrus, & N.
Zainuddin (Eds), ICEL2016-Proceedings of the 11th International Conference on
e-Learning: ICEl2016. Academic Conferences and Publishing International
Limited.
Robert, L. P.Jr. (2018). Motivational theory of human robot teamwork. International
Robotics & Automation Journal, 4(4), 248-251. https://doi.org/10.15406/iratj.
2018.04.00131.
Robert, L. P.Jr., & You, S. (2015). Subgroup formation in teams working with robots. In B.
Begole, J. Kim, K. Inkpen, & W. Woo (Eds), Proceedings of the 33rd Annual ACM
Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA


(15, pp. 2097-2102). ACM Press. https://doi.org/10.1145/2702613.2732791.
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and
limits in the search for mental models. Psychological Bulletin, 100(3), 349-363.
https://doi.org/10.1037/0033-2909.100.3.349.
Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition and
Personality, 9(3), 185-211. https://doi.org/10.2190/DUGG-P24E-52WK-6CDG.
Samani, H. A., & Cheok, A. D. (2011). From human-robot relationship to robot-based
leadership. 4th International Conference on Human System Interactions. Japan:
Yokohama. https://doi.org/10.1109/hsi.2011.5937363.
Scassellati, B., & Vázquez, M. (2020). The potential of socially assistive robots during
infectious disease outbreaks. Science Robotics, 5(44). https://doi.org/10.1126/
scirobotics.abc9014.
Scheutz, M., DeLoach, S. A., & Adams, J. A. (2017). A framework for developing and
using shared mental models in human-agent teams. Journal of Cognitive Engineering
and Decision Making, 11(3), 203-224. https://doi.org/10.1177/1555343416682891.
Schmitt, C., Schäfer, J., & Burmester, M. (2017). Wie wirkt der Care-O-bot 4 im
Verkaufsraum? https://doi.org/10.18420/muc2017-up-0171.
Schwartz, T., Zinnikus, I., Krieger, H.-U., Bürckert, C., Folz, J., Kiefer, B., Hevesi, P.,
Lüth, C., Pirkl, G., Spieldenner, T., Schmitz, N., Wirkus, M., & Straube, S. (2016).
Hybrid teams: Flexible collaboration between humans, robots and virtual agents. In
M. Klusch, R. Unland, O. Shehory, A. Pokahr, & S. Ahrndt (Eds), Lecture Notes in
Computer Science. Multiagent System Technologies (Vol. 9872, pp. 131-146).
Springer International Publishing. https://doi.org/10.1007/978-3-319-45889-2_10.
Scrum Alliance (2021). The scrum team roles and accountabilities. Retrieved September
17, 2021, from https://resources.scrumalliance.org/Article/scrum-team.
Sellner, B., Heger, F. W., Hiatt, L. M., Simmons, R., & Singh, S. (2006). Coordinated
multiagent teams and sliding autonomy for large-scale assembly. Proceedings of
the IEEE, 94(7), 1425-1444. https://doi.org/10.1109/jproc.2006.876966.
Shah, J., & Breazeal, C. (2010). An empirical analysis of team coordination behaviors
and action planning with application to human–robot teaming. Human Factors,
52(2), 234-245. https://doi.org/10.1177/0018720809350882.
Shah, J., Wiken, J., Williams, B., & Breazeal, C. (2011). Improved human-robot team
performance using Chaski, a human-inspired plan execution system. HRI’11,
Proceedings of the 6th ACM/IEEE International Conference on Human-Robot
Interaction (pp. 29-36). ACM. https://doi.org/10.1145/1957656.1957668.
Sierhuis, M., Bradshaw, J. M., Acquisti, R., van Hoof, R., & Jeffers, R. (2003). Human-agent
teamwork and adjustable autonomy in practice. Proceedings of the 7th International
Symposium on Artificial Intelligence, Robotics and Automation in Space. Nara, Japan.
Stewart, A., Cao, M., Nedic, A., Tomlin, D., & Leonard, N. (2012). Towards human–
robot teams: Model-based analysis of human decision making in two-alternative
choice tasks with social feedback. Proceedings of the IEEE, 100(3), 751-775.
https://doi.org/10.1109/jproc.2011.2173815.
Stock, R. (2003). Teams an der Schnittstelle zwischen Anbieter- und Kundenunternehmen:
Eine integrative Betrachtung. Springer-Verlag.
Stock, R. (2004). Drivers of team performance: What do we know and what have we
still to learn? Schmalenbach Business Review, 56(3), 274-306. https://doi.org/10.
1007/BF03396696.
Stock, R., Merkle, M., Eidens, D., Hannig, M., Heineck, P., Nguyen, M. A., & Völker,
J. (2019). When robots enter our workplace: Understanding employee trust in
assistive robots. Fortieth International Conference on Information Systems.
Munich, Germany.
Strohkorb Sebo, S., Dong, L. L., Chang, N., & Scassellati, B. (2020). Strategies for the
inclusion of human members within human-robot teams. HRI ’20, Proceedings of
the 2020 ACM/IEEE International Conference on Human-Robot Interaction
(pp. 309-317). ACM. https://doi.org/10.1145/3319502.3374808.
Strohkorb Sebo, S., Traeger, M. L., Jung, M. F., & Scassellati, B. (2018). The ripple
effects of vulnerability: The effects of a robot’s vulnerable behavior on trust in
human-robot teams. HRI’18, Proceedings of the 2018 ACM/IEEE International
Conference on Human-Robot Interaction (pp. 178-186). ACM. https://doi.org/10.
1145/3171221.3171275.
Tajfel, H. (1974). Social identity and intergroup behaviour. Information (International
Social Science Council), 13(2), 65-93. https://doi.org/10.1177/053901847401300204.
Talamadupula, K., Briggs, G., Chakraborti, T., Scheutz, M., & Kambhampati, S.
(2014). Coordination in human-robot teams using mental modeling and plan
recognition. IEEE/RSJ International Conference on Intelligent Robots and Sys-
tems. Chicago, IL, USA. https://doi.org/10.1109/iros.2014.6942970.
Tamburrini, G. (2009). Robot ethics: A view from the philosophy of science. Ethics
and Robotics, 11–22.
Tang, F., & Parker, L. E. (2006). Peer-to-peer human-robot teaming through re-
configurable schemas. AAAI Spring Symposium: To Boldly Go Where No
Human-Robot Team Has Gone Before. Technical Report SS-06-07.
Traeger, M. L., Strohkorb Sebo, S., Jung, M., Scassellati, B., & Christakis, N. A.
(2020). Vulnerable robots positively shape human conversational dynamics in
a human–robot team. Proceedings of the National Academy of Sciences, 117(12),
6370-6375. https://doi.org/10.1073/pnas.1910402117.
SIM TU Darmstadt (2021). Robotergalerie. https://www.sim.informatik.tu-darmstadt.
de/res/robotics_lab/.
van Breukelen, W., Schyns, B., & Le Blanc, P. (2006). Leader-member exchange
theory and research: Accomplishments and future challenges. Leadership, 2(3),
295-316. https://doi.org/10.1177/1742715006066023.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
https://doi.org/10.2307/30036540.
Wang, H., Lewis, M., & Chien, S.-Y. (2010). Teams organization and performance
analysis in autonomous human-robot teams. PerMIS ’10, Proceedings of the 10th
Performance Metrics for Intelligent Systems Workshop (pp. 251-257). ACM, New
York, NY, USA. https://doi.org/10.1145/2377576.2377622.
Wang, N., Pynadath, D. V., & Hill, S. G. (2016a). The impact of POMDP-generated
explanations on trust and performance in human-robot teams. Proceedings of the
2016 International Conference on Autonomous Agents & Multiagent Systems
(pp. 997-1005). International Foundation for Autonomous Agents and Multiagent
Systems, Richland, SC.
Wang, N., Pynadath, D. V., & Hill, S. G. (2016b). Trust calibration within a human-
robot team: Comparing automatically generated explanations. HRI’16: The
Eleventh ACM/IEEE International Conference on Human Robot Interaction.
IEEE. https://doi.org/10.1109/hri.2016.7451741.
Wang, N., Pynadath, D. V., Rovira, E., Barnes, M. J., & Hill, S. G. (2018). Is it my
looks? Or something I said? The impact of explanations, embodiment, and ex-
pectations on trust and performance in human-robot teams. In J. R. C. Ham, E.
Karapanos, P. P. Morita, & C. M. Burns (Eds), Persuasive Technology (pp. 56-69).
Springer International Publishing. https://doi.org/10.1007/978-3-319-78978-1_5.
Wang, J., Wang, H., & Lewis, M. (2008). Assessing cooperation in human control of
heterogeneous robots. HRI ’08, Proceedings of the 3rd ACM/IEEE International
Conference on Human Robot Interaction (pp. 9-16). ACM. https://doi.org/10.
1145/1349822.1349825.
Williams, T., Briggs, P., & Scheutz, M. (2015). Covert robot-robot communication:
Human perceptions and implications for HRI. Journal of Human-Robot Interaction,
4(2), 23. https://doi.org/10.5898/JHRI.4.2.Williams.
Wolf, F. D., & Stock-Homburg, R. M. (2021). Making the first step towards robotic
leadership – hiring decisions for robotic team leader candidates. ICIS 2021
Proceedings. https://aisel.aisnet.org/icis2021/hci_robot/hci_robot/2.
Woods, D. D., Tittle, J., Feil, M., & Roesler, A. (2004). Envisioning human-robot
coordination in future operations. IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews), 34(2), 210-218. https://doi.org/
10.1109/tsmcc.2004.826272.
Yagoda, R. E., & Gillan, D. J. (2012). You want me to trust a ROBOT? The de-
velopment of a human–robot interaction trust scale. International Journal of
Social Robotics, 4(3), 235-248. https://doi.org/10.1007/s12369-012-0144-0.
Yamaji, Y., Miyake, T., Yoshiike, Y., De Silva, P. R. S., & Okada, M. (2011). STB:
Child-dependent sociable trash box. International Journal of Social Robotics,
3(4), 359-370. https://doi.org/10.1007/s12369-011-0114-y.
Yazdani, F., Brieber, B., & Beetz, M. (2016). Cognition-enabled robot control for
mixed human-robot rescue teams. In E. Menegatti, N. Michael, K. Berns, & H.
Yamaguchi (Eds), Intelligent Autonomous Systems 13. Springer International
Publishing. https://doi.org/10.1007/978-3-319-08338-4_98.
Yi, D., & Goodrich, M. A. (2014). Supporting task-oriented collaboration in human-
robot teams using semantic-based path planning. In R. E. Karlsen, D. W. Gage,
C. M. Shoemaker, & G. R. Gerhart (Eds), SPIE Defense + Security. Baltimore, MD, USA.
You, S., & Robert, L. P. Jr. (2016). Curiosity vs. control: Impacts of training on
performance of teams working with robots. CSCW ’16 Companion: Proceedings
of the 19th ACM Conference on Computer Supported Cooperative Work and
Social Computing Companion (pp. 449-452). Association for Computing Ma-
chinery. https://doi.org/10.1145/2818052.2869121.
You, S., & Robert, L. P. Jr. (2017). Emotional attachment, performance, and viability in
teams collaborating with embodied physical action (EPA) robots. Journal of the
Association for Information Systems, 19(5), 377-407.
You, S., & Robert, L. P. Jr. (2018a). Human-robot similarity and willingness to work
with a robotic co-worker. HRI’18, Proceedings of the 2018 ACM/IEEE In-
ternational Conference on Human-Robot Interaction (pp. 251-260). ACM. https://
doi.org/10.1145/3171221.3171281.
You, S., & Robert, L. P. Jr. (2018b). Teaming up with robots: An IMOI (inputs-
mediators-outputs-inputs) framework of human-robot teamwork. International
Journal of Robotic Engineering, 2(3).
You, S., & Robert, L. P. Jr. (2019a). Subgroup formation in human-robot teams.
Fortieth International Conference on Information Systems. Munich, Germany.
You, S., & Robert, L. P. Jr. (2019b). Trusting robots in teams: Examining the impacts of
trusting robots on team performance and satisfaction. Proceedings of the 52nd
Hawaii International Conference on System Sciences. Maui, HI, USA.
Zaheer, A., McEvily, B., & Perrone, V. (1998). Does trust matter? Exploring the effects
of interorganizational and interpersonal trust on performance. Organization
Science, 9(2), 141-159. https://doi.org/10.1287/orsc.9.2.141.
Zheng, K., Glas, D. F., Kanda, T., Ishiguro, H., & Hagita, N. (2013). Designing and
implementing a human–robot team for social interactions. IEEE Transactions on
Systems, Man, and Cybernetics: Systems, 43(4), 843-859. https://doi.org/10.1109/
TSMCA.2012.2216870.

Associate Editor: Sarah Bankins


Submitted Date: February 14, 2021
Revised Submission Date: December 16, 2021
Acceptance Date: December 25, 2021

Author Biographies
Franziska Doris Wolf is a Ph.D. student at the Chair for Marketing and Human
Resource Management at the Technical University of Darmstadt, Germany. Her re-
search interests include the introduction and establishment of human-robot teams in an
organizational context. franziska.wolf@bwl.tu-darmstadt.de
Ruth Maria Stock-Homburg is Professor of Marketing and Human Resource
Management at the Technical University of Darmstadt, Germany and founder of the
leap in time Research Institute. She holds a Ph.D. in Economics and a Ph.D. in
Psychology. Her research interests include Human-Robot Interaction, Future of Work,
Leadership and User Innovation. rsh@bwl.tu-darmstadt.de
