2023 - How and When Can Robots Be Team Members
Decades of Research on Human–Robot Teams
Abstract
Artificial intelligence and robotic technologies have grown in sophistication and reach. Accordingly, research into mixed human–robot teams that comprise both robots and humans has expanded as well, attracting the attention of researchers from different disciplines, such as organizational behavior, human–robot interaction, cognitive science, and robotics. With this systematic literature review, the authors seek to establish deeper insights into existing research and sharpen the definitions of relevant terms. With a close consideration of 150 studies published between 1990 and 2020 that investigate mixed human–robot teams, conceptually or empirically, this article provides both a systematic evaluation of extant research and propositions for further research.
Keywords
mixed human–robot team, technology, team dynamics/processes, intra-team
dynamics, inter-team dynamics, robotic teammate, robotic leader, robotic
team assistant, robotic roles, overview
1 Chair for Marketing and Human Resource Management, Technical University of Darmstadt, Darmstadt, Germany
Corresponding Author:
Franziska Doris Wolf, Chair for Marketing and Human Resource Management, Technical
University of Darmstadt, Hochschulstraße 1, 64289, Darmstadt, Germany.
Email: franziska.wolf@bwl.tu-darmstadt.de
Wolf and Stock-Homburg 1667
In many current work settings, humans partner with robots to accomplish tasks
in various fields. Many of these robots can be classified as social robots, which
interact with humans in natural ways that feature speech, gestures, and facial
expressions (Breazeal, 2003). Unlike industrial robots, they work like unique,
contributing members of organizations and so-called human–robot teams
(HRTs) (Hoffman & Breazeal, 2004).
The presence and uses of such teams are growing, especially in the face of
the various restrictions imposed by the COVID-19 pandemic (Scassellati &
Vázquez, 2020). An estimated 82% of business leaders already believed in
2018 that HRTs would be a daily reality within 5 years (Dell Technologies,
2018); when we recently surveyed 596 U.S. employees1 (65% men, mean
age = 36.92 years, SD = 10.85 years), we learned that they could easily
imagine working with a robot as teammate (39%), team assistant (50%), or
even team leader (34%). For example, robots can track projects, perform real-
time scheduling, and support complex organizational decision-making
processes.
Even as these uses and imagined applications expand, though, research on HRTs remains limited by disciplinary silos. That is, the concept is interdisciplinary, but we lack summary assessments of existing knowledge about, or common definitions of, HRTs across the individual disciplines. Nor do we have a sense of which factors or team member characteristics inform the ways of working and outcomes of such HRTs. With this
review, we attempt to systematically synchronize extant definitions and detail
prior research on HRTs according to its theoretical perspectives, empirical
design, and major findings.
We focus on embodied robots, which we define as physical representations of AI in a physical world that recognize their environment and can interact with it (Bradshaw et al., 2009; Fong et al., 2003; High-Level Expert Group on Artificial Intelligence, 2019; Wolf & Stock-Homburg, 2021).2 For the review, we conducted online searches using Google Scholar and EBSCO but also reviewed journals and conference proceedings related to human–robot interactions. As detailed in Supplementary Appendix A, we searched 17 conferences and 40 journals, most of them from the fields of HRI, robotics, and computer science. We manually assessed each study's type, embodiment form, robot level, focus topic, and team size and applied various related exclusion criteria. Ultimately, we reviewed 150 relevant studies, published between 1990 and May 2020 (for further details on the study selection, see Figure 1 and Supplementary Appendix A). This review attempts to answer two questions:
1668 Group & Organization Management 48(6)
Figure 1. Overview of reviewed, included, and excluded studies. Note: (1) Please
see Supplementary Appendix A for more details on the exclusion criteria. (2) In total,
we reviewed 150 studies in detail. Details on the 24 studies considering dyadic task
teams can be found in Supplementary Appendix B.
Figure 2. Robot typology with selected examples from literature. Notes: Due to anthropomorphism, robots can be attributed more prominent human (social) characteristics than they originally were designed to include (see arrows). Picture sources: Sociable Trash Box, Pepper, Johnny, TIAGo, Robonaut: all from ABOT database (http://abotdatabase.info//); Care-o-bot: Fraunhofer IPA (https://www.care-o-bot.de/de/care-o-bot-3/download/images.html); Roomba: iRobot (https://shop.irobot.de/roomba-staubsstaubsaugerroboter-roomba-606/R606040.html); NIFTi ground vehicle: Kruijff et al., 2014; Elenoide: leap in time GmbH Darmstadt.
• Machine-like robot with low social interaction: Robots like the Roomba
vacuum (Forlizzi & DiSalvo, 2006) or the NIFTi ground vehicle (Kruijff,
Kruijff-Korbayová, et al., 2014) are designed primarily with functionality
in mind.
• Human-like robot with low social interaction: Robots like Johnny 05 (SIM TU Darmstadt, 2021), TIAGo (Pages et al., 2016), and Robonaut (Bluethmann et al., 2003) are humanoid robots with legs, arms, and heads. Despite this physical appearance, these robots are designed primarily to fulfill intended (work) tasks, not to engage in social interaction.
• Machine-like robot with high social interaction: Robots in this category,
like the Sociable Trash Box (Yamaji et al., 2011) and the Care-O-Bot
(Kittmann et al., 2015), lack a human physical appearance but can elicit
social responses (Schmitt et al., 2017).
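The typology above spans two dimensions: physical appearance (machine-like vs. human-like) and degree of social interaction (low vs. high). As a purely illustrative encoding of that 2×2 scheme (the class and function names below are ours, and the quadrant assignments simply restate the examples given in the text):

```python
from dataclasses import dataclass

# Illustrative only: a minimal encoding of the two typology dimensions
# (physical appearance x degree of social interaction) described above.
@dataclass(frozen=True)
class Robot:
    name: str
    human_like: bool         # machine-like (False) vs. human-like (True)
    social_interaction: str  # "low" vs. "high"

# Quadrant assignments restate the examples from the text and Figure 2.
ROBOTS = [
    Robot("Roomba", human_like=False, social_interaction="low"),
    Robot("NIFTi ground vehicle", human_like=False, social_interaction="low"),
    Robot("TIAGo", human_like=True, social_interaction="low"),
    Robot("Robonaut", human_like=True, social_interaction="low"),
    Robot("Sociable Trash Box", human_like=False, social_interaction="high"),
    Robot("Care-O-Bot", human_like=False, social_interaction="high"),
]

def quadrant(r: Robot) -> str:
    """Return the typology quadrant label for a robot."""
    form = "human-like" if r.human_like else "machine-like"
    return f"{form} / {r.social_interaction} social interaction"

print(quadrant(ROBOTS[0]))  # machine-like / low social interaction
```

Note that, as the figure's arrows indicate, anthropomorphism can shift a robot's perceived position along these dimensions, so such an assignment reflects design intent rather than a fixed property.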
Team Typology
All-Human Teams. By common agreement, human teams are defined as collectives of three or more people (Stock, 2003), “who (a) exist to perform organizationally relevant tasks, (b) share one or more common goals, (c) interact
socially, (d) exhibit task interdependencies . . . , (e) maintain and manage
boundaries, and (f) are embedded in an organizational context” (Kozlowski &
Bell, 2003, p. 334). They are dynamic, at three main levels (de Wit & Greer,
2008; DeChurch et al., 2013): “tasks (i.e., goals, ideas, and performance
strategies), . . . relationships (i.e., personality clashes, interpersonal styles)”
(DeChurch et al., 2013, p. 560), and the processes used to manage or achieve
teamwork (de Wit & Greer, 2008).
Table 1. Overview of Different Team Compositions, Sample Definitions, and Related Research.

Human-directed robot team
Sample definition: “a single human operator can oversee and flexibly intervene in the operation of a team of largely autonomous robots” (Sellner et al., 2006, p. 1425).
Related empirical research: Management: Pina et al., 2008; Sellner et al., 2006. Cognitive science: J. Wang et al., 2008; You & Robert, 2016, 2017, 2019a, 2019b. HRI: Alboul et al., 2008; Crandall et al., 2003; Goodrich et al., 2007. Military: Brown et al., 2005. Robotics: Zheng et al., 2013. (Urban) search and rescue: Burke & Murphy, 2004, 2007; Kantor et al., 2006; Lee et al., 2010; Ranzato & Vertesi, 2017; H. Wang et al., 2010; Yazdani et al., 2016.

Human-/Robot-directed mixed team
Sample definition: “human workers . . . perform physical tasks in coordination with robotic partners” and “human and robot co-leaders [have] identical functions and capabilities, by restricting the human co-leaders’ capabilities such that they were the same as those of the robot” (Gombolay, Gutierrez, et al., 2015, pp. 295–296).
Related empirical research: HRI: Law et al., 2020. Management: Gombolay, Gutierrez, et al., 2015, referring to human and robotic co-leads and human assistants; Gombolay, Huang, & Shah, 2015, referring to human leader, robotic and human assistants.

Robot-directed human team
Sample definition: “the partner [robot] . . . is instructing the primary human . . . on the task steps to complete. There are no shared decision making tasks” (Harriott et al., 2011, p. 46).(a)
Related empirical research: N/A; the only studies with such a team composition refer to robot-directed dyadic task teams.

Autonomous mixed team
Sample definition: “humans and robots [work] together to accomplish complex team tasks” (Dias et al., 2008, p. 1).
Related empirical research: Cognitive science: Correia, Mascarenhas, et al., 2019; Jung et al., 2015; Strohkorb Sebo et al., 2020; Traeger et al., 2020. HRI: Gervits et al., 2020; Kwon et al., 2019; Tang & Parker, 2006. Robotics: Claure et al., 2020; Iqbal & Riek, 2017; Marge et al., 2009. Space: Fong et al., 2005; Fong et al., 2006. (Urban) search and rescue: Dias et al., 2008; Jung et al., 2013.

Note: Team composition: s = human, □ = robot. The studies (with team sizes of at least n = 3) are categorized according to a best-fit approach, so they might feature aspects of more than one research discipline. The overview of related empirical research is not exhaustive.
(a) Harriott et al. (2011) only consider a dyadic task team.
dyadic task teams in depth, instead see Supplementary Appendix B). From
a narrow perspective, combining the insights gleaned from all-human team
definitions (e.g., Stock, 2004) and robotic research, we define HRTs as
multiple-member collaborative teams.4
Proposed Framework
In this overview, we rely on an input–process–output (IPO) model (Gladstein, 1984; You & Robert, 2018b; see Figure 4). Categories 1 and 2 focus on two important input factors: intra-member team characteristics, such as the (physical) robot design, robot behavior, or human preferences and behavior (Category 1), and inter-member team characteristics, including team composition, autonomy, and leadership (Category 2). Category 3 includes studies of team processes (Barrick et al., 1998; Gladstein, 1984) like (physical) coordination, communication, collaboration, and trust. The factors and processes in these categories affect team outputs (Barrick et al., 1998), defined as “psychological and business-related outcomes produced by teams” (Stock, 2004, p. 277). Studies in Category 4 investigate moderating effects on input, process, and output. Finally, some studies depict causal chains (Stock, 2004) from the inputs through mediators to outputs (Category 5). The coding scheme used to classify studies is explained in detail in Supplementary Appendix A.
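As a purely illustrative sketch of how such an IPO-based coding of studies can be operationalized (the enum labels paraphrase the five categories above; the field names and the sample entry are our own simplification, not the actual scheme from Supplementary Appendix A):

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    """The five study categories of the IPO-based framework."""
    INTRA_MEMBER_INPUT = 1  # e.g., robot design, robot/human behavior
    INTER_MEMBER_INPUT = 2  # e.g., team composition, autonomy, leadership
    TEAM_PROCESS = 3        # e.g., coordination, communication, trust
    MODERATOR = 4           # moderating effects on input, process, output
    CAUSAL_CHAIN = 5        # input -> mediator -> output chains

@dataclass
class Study:
    citation: str
    categories: list                       # a study may touch several categories
    key_variables: list = field(default_factory=list)

# Hypothetical coding of one reviewed study, for illustration only.
example = Study(
    citation="Traeger et al. (2020)",
    categories=[Category.INTRA_MEMBER_INPUT],
    key_variables=["robot vulnerability", "conversation equality"],
)

def by_category(studies, category):
    """Filter a coded corpus down to the studies in one IPO category."""
    return [s for s in studies if category in s.categories]

print([s.citation for s in by_category([example], Category.INTRA_MEMBER_INPUT)])
```

Allowing multiple categories per study mirrors the review's observation that individual studies often span more than one part of the IPO chain.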
Author/Subcategory/Discipline/Team Interaction — Key Findings

Ambrose et al. (2000)/(physical) robot design/VI/T
• Overview of the design of NASA’s Robonaut

Bluethmann et al. (2003)/(physical) robot design/VI/T
• Information on the design of NASA’s Robonaut

Fong et al. (2005)/(physical) robot design/VI/T
• Proposal of interaction framework “Human–Robot Interaction Operating System” (HRI/OS)
• Proposal of metrics for evaluation of HRTs

Kelly and Watts (2017)/robot behavior/V/T+S
• Position paper that suggests that task-related “inefficiency” in the form of social behavior should be considered when designing social robots
Table 3. Empirical Studies on Multiple-Member HRTs Related to Intra-Member Team Characteristics and Their Effects.

Claure et al. (2020)/robot behavior/V
Robot morphology/robot level/type of embodiment: n.i./↔/n.i.
Team interaction: T+S
Data basis/participants/time frame: n = 282 (156 f, 124 m, 1 other, 1 not disclosed; age: M = 36 years, SD = 11; mTurk; from the US)/C
Underlying theories: Equity models, fairness theory
Independent variable(s): Robot fairness
Dependent variable(s): User trust (+/n.s.); Perceived robot fairness (n.s.)
Key findings: “Fairness of resource allocation has significant effect on user’s trust in the system” (p. 299)

Correia, Mascarenhas, et al. (2019)/robot behavior/IV
Robot morphology/robot level/type of embodiment: Humanoid (EMYS robotic head)/↔/Physical robot
Team interaction: T+S
Data basis/participants/time frame: n = 70 (32 f, 37 m, 1 unknown; age: range 22–62 years, M = 34.6, SD = 11.557)/C
Underlying theories: n.i.
Independent variable(s): Prosocial robot behavior
Dependent variable(s): Perceived robot social attributes (+)
Key findings: Prosocial robots are rated more positively in terms of their social attributes (p. 143); “The perception of competence, the responsibility attribution (blame/credit) and the preference for a future partner are only significantly different in the losing condition” (p. 143)

Correia, Petisca, et al. (2019)/human preferences and behavior/IV
Robot morphology/robot level/type of embodiment: Humanoid (“Emys”, “Glin”; both identical physical appearance: EMYS)/↔/Physical robot
Team interaction: T+S
Data basis/participants/time frame: n1 = 30, n2 = 61/1: 17 m, age: range 19–42 years, M = 23.03 (SD = 4.21), university students; 2: 38 m, age: range
Underlying theories: Learning goal theory
Independent variable(s): Robot goal orientation (performance-driven vs. learning-driven)
Dependent variable(s): Competitiveness Index (higher for performance-driven); McGill Friendship Questionnaire (higher for learning-driven)
Key findings: “When a partner is chosen without previous partnering experience, people tend to prefer robots with relationship-driven characteristics as

Fraune et al. (2020)/robot behavior/I
Robot morphology/robot level/type of embodiment: Functional (Sociable Trash Box (STB))/n.i./Image/video of a robot, physical robot
Team interaction: T+S
Data basis/participants/time frame: n1 = 630, n2 = 71/1: from USA (n = 333, 47% f, age M = 24.59, SD = 9.59) and Japan (n = 297, 7% f, age M = 21.55, SD = 3.35), recruited in universities; 2: from USA (42% f, age M = 19.20, SD = 1.30), recruited from university/C
Underlying theories: Social identity theory
Independent variable(s): Study 1: Robot behavior toward robots (none, social, functional); Robot behavior toward humans (social, functional); Country (US, Japan). Study 2: Robot behavior toward robot (social, functional); Robot behavior toward human (social, functional)
Dependent variable(s): Study 1: Anthropomorphism of robot (partially + for robot–robot social, n.s. for other conditions); Emotional and behavioral intention about robot (n.s.); Entativity of robot (n.s.). Study 2: Cooperation (n.s.); Anthropomorphism of robot (partially + or − for robot–robot social); Emotional and behavioral intention about robot (+ for robot–human social); Entativity of robot (partially + for robot–human functional)
Key findings: Social robot–robot behavior increases anthropomorphism, social robot–human behavior increases positive emotions and willingness for interactions (p. 1); Robots that are designed for positive human interaction resp. to be perceived intelligent should behave socially towards humans resp. also towards robots (p. 1)

Gombolay et al. (2017)/robot behavior, human preferences and behavior/I
Robot morphology/robot level/type of embodiment: Functional (Willow Garage PR2 platform)/↓/Physical robot
Team interaction: T+S
Data basis/participants/time frame: n1 = 17, n2 = 18, n3 = 20 (all recruited from local university; 1: 6 m, age: range 18–25 years, M = 19.5, SD = 1.95; 2: 10 m, age: range 19–45 years, M = 27, SD = 7; 3: 10 m, age: range 18–30 years, M = 21, SD = 3)/C
Underlying theories: Situational awareness
Independent variable(s): Degree of robotic autonomy in scheduling decisions; Degree to which participant’s preferences are respected by robotic teammate; Participant utilization
Dependent variable(s): Situation awareness (−); Preference to work with robot (+); Preference to work with robot (+, +)
Key findings: “human participants’ awareness of their team’s actions decreased as the degree of robot autonomy increased” (p. 614); “participants preferred working with a robot that included their preferences when scheduling and . . . preferred working with a robot that utilized them more frequently” (p. 613)

Gombolay, Huang, and Shah (2015)/robot behavior, human preferences and behavior/II
Robot morphology/robot level/type of embodiment: Functional (Willow Garage PR2 platform)/↓/Physical robot
Team interaction: T+S
Data basis/participants/time frame: n = 17/n.i./C
Underlying theories: n.i.
Independent variable(s): Consideration of human preferences
Dependent variable(s): Willingness to work (+)
Key findings: Humans prefer working with a robotic teammate that considers their preferences; Team efficiency has to be kept in mind when allocating decision-making authority (robot taking decisions can lead to decreased efficiency and belief that the robot is unaware of team goals)

Jiang and Wang (2019)/robot behavior/V
Robot morphology/robot level/type of embodiment: n.i./n.i./n.i.
Team interaction: T+S
Data basis/participants/time frame: n.i./n.i./n.i.
Underlying theories: Regret theory
Independent variable(s): Robot decision making (regret-decision model)
Dependent variable(s): Teaming performance (+)
Key findings: More human-like decision-making by robots can help to balance workload and performance in HRTs

Law et al. (2020)/(physical) robot design, robot behavior/I
Robot morphology/robot level/type of embodiment: Humanoid (Willow Garage PR2)/↔/Image/video of robots
Team interaction: T+S
Data basis/participants/time frame: n1 = 198, n2 = 421/1: 95 f, 1 other, age: range 18–77 years (M = 34.96, SD = 11.47); 2: 162 f, 3 other, age: range 18–81 years (M = 36.52, SD = 11.85); both: mTurk/C
Underlying theories: Emotional intelligence, social role theory
Independent variable(s): Study 1: Robot emotional intelligence (+); Robot gender (+, male); Vignette presentation (n.s.). Study 2: Robot emotional intelligence (+); Robot gender (+, male); Vignette presentation (+, text); Participant gender (n.s.); Participant age (−)
Dependent variable(s): Trust in robot
Key findings: Robotic EI influences trust in a robot (p. 1); “Gender stereotypical expectations related to EI [are] transferred to trust” (p. 1)

Lei and Rau (2020)/human preferences and behavior/IV
Robot morphology/robot level/type of embodiment: Humanoid (Nao)/↔/Physical robot
Team interaction: T+S
Data basis/participants/time frame: n = 60 (30 f; age: M = 22.2 years, SD = 2.29; (under-)graduate students (40%/60%))/C
Underlying theories: CASA paradigm, common sense psychology, gender
Independent variable(s): Task outcome (n.s.); Human gender (−/+)
Dependent variable(s): Attribution of blame to robot; Attribution of credit to robot
Key findings: Gender effects play a role in the attribution of credit and blame to robot team members; “participants attributed more credit and less blame

Traeger et al. (2020)/robot behavior/IV
Robot morphology/robot level/type of embodiment: Humanoid (Nao)/↔/Physical robot
Team interaction: T+S
Data basis/participants/time frame: n = 153 (in 51 groups of 3 each)/vulnerable condition: 28 f, 26 m, age: M = 20.13 years (SD = 7.13); neutral condition: 36 f, 15 m, age: M = 21.33 years (SD = 11.01); silent condition: 31 f, 17 m, age: M = 23.94 years (SD = 7.36)/C
Underlying theories: n.i.
Independent variable(s): Robot vulnerability
Dependent variable(s): Team member interactions with other human team members (+); Total talking time (+); Team perception (+); Conversation equality (+)
Key findings: “people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round” (p. 6370)

Note: Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline. n.i. = no information provided by author(s). Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. Team interaction: T = task interaction, T+S = task & social interaction. Team setup: s = human, □ = robot. Participants: f = female, m = male. Time frame: C = cross-sectional, L = longitudinal. Empirical findings: (−) = negative effect, (+) = positive effect, (n.s.) = not significant.
Table 4. Empirical Studies on Dyadic HRTs Related to Intra-Member Team Characteristics and Their Effects.

Arnold and Scheutz (2018)/robot behavior/I
Robot morphology/robot level/type of embodiment: Humanoid/↔/Image/video of a robot
Team interaction: T+S
Data basis/participants/time frame: n = 332 (135 f; age: 44.43 years; mTurk; US citizens)/C
Underlying theories: n.i.
Independent variable(s): Positive robot attitude; Robot touch
Dependent variable(s): Perceived robot capability (+/n.s.); Confidence in robot skills (+); Perceived robot qualification (+/n.s.); Perceived robot fairness (+)
Key findings: Robot touch leads to better rating of the social performance, skills, and fairness of a robot; However, gender effects from survey responses show that robot touch has to be considered with caution, as the context and expectations from society can lead to a significantly varying perception of robot touch

Bartneck et al. (2006)/(physical) robot design/I
Robot morphology/robot level/type of embodiment: Android (Tron-X, PKD), animal-like (AIBO)/↔/Image/video of a robot
Team interaction: T+S
Data basis/participants/time frame: n = 12 (age: range 21–54 years, M = 29.9; Master’s and Ph.D. students in Psychology or Engineering)/within-subject design, C
Underlying theories: CASA paradigm, uncanny valley paradigm
Independent variable(s): Human-/animal-likeness of robot
Dependent variable(s): Praise (+); Punishment (−)
Key findings: The study results lead to the conclusion that the CASA paradigm holds true for computers; Robots, on the other hand, were treated differently depending on their physical appearance: very human-like or animal-like robots were praised more and punished less than computer and human, machine-like robots were treated like computer and human

Hiatt et al. (2011)/robot behavior/IV
Robot morphology/robot level/type of embodiment: Humanoid (Mobile, Dexterous, Social (MDS) robot)/↔/Image/video of a robot
Team interaction: T+S
Data basis/participants/time frame: n = 35/n.i./n.i.
Underlying theories: Theory of mind
Independent variable(s): Robot explanation
Dependent variable(s): Perceived robot naturalness (+); Perceived robot intelligence (+)
Key findings: A robot that uses a theory of mind (ToM) approach and offers explanations is perceived both more intelligent and natural than a robot that either shows only simple correction

N. Wang et al. (2016b)/robot behavior/III
Robot morphology/robot level/type of embodiment: Functional/n.i./Simulation/virtual robot
Team interaction: T+S
Data basis/participants/time frame: n = 220 (mTurk, USA)/C (between-subject)
Underlying theories: n.i.
Independent variable(s): Robot explanations
Dependent variable(s): Transparency (+); Trust (+); Performance (+)
Key findings: A better understanding of decision-making processes of a robot can help improve trust in HRTs (similar experiment as in “The impact of POMDP-generated explanations on trust and performance in human-robot teams”)

N. Wang et al. (2018)/robot behavior/IV
Robot morphology/robot level/type of embodiment: Animal-like, functional/↓/Simulation/virtual robot (online HRI test bed)
Team interaction: T+S
Data basis/participants/time frame: n = 61 (14 f; age: range 18–23 years, M = 19.2; higher-education military school in the US; participants received extra course credit for participation)/C (2 sessions, 120 mins total, 8 missions)
Underlying theories: n.i.
Independent variable(s): Embodiment (n.s.); Communication strategy in case of error (n.s.); Explanations (+)
Dependent variable(s): Trust; Transparency; Transparency test score; Compliance; No. of correct decisions made
Key findings: Explanations by robots (even if they don’t indicate which components of a robot are faulty) have significant effects on transparency and self-reported trust of participants and result in better decision-making of a human teammate; Robot embodiment and acknowledgement of mistakes have only a marginally significant or no significant impact on self-reported trust, transparency, or correct decisions

Note: Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a “best fit” approach and might comprise aspects of more than one considered research discipline. n.i. = no information provided by author(s). Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. Team interaction: T = task interaction, T+S = task & social interaction. Team setup: s = human, □ = robot. Participants: f = female, m = male. Time frame: C = cross-sectional, L = longitudinal.
Abrams & der Pütten (2020)/team perceptions/IV/T+S
Underlying theories: In-group identification (e.g., social identity theory); cohesion theories (e.g., group development theory); entativity theory (e.g., formation of perceived entativity)
Key findings: In-group identification, cohesion, and entitativity (I-C-E) framework can be used as a theoretical basis for research on human–robot groups; Multi-agent groups are similar but not the same as all-human groups; Dyads have unique processes that differ from group and team processes

Bradshaw et al. (2012)/autonomy and control/III/T
Underlying theories: n.i.
Key findings: Autonomy and coordination in human-agent-robot teamwork should be in the focus of future research to solve current problems

Dudenhoeffer et al. (2001)/autonomy and control/III/T+S
Underlying theories: Shared mental models, situational awareness
Key findings: Simulations are widely used in HRT and HRI research and can help to gain insights into this field, esp. when many robots are involved

Gladden (2014)/leadership/I/T+S
Underlying theories: French and Raven’s bases of power (with charismatic authority being a manifestation of referent power)
Key findings: Charismatic robotic leaders will probably emerge naturally; Introduction of three possible ways of charismatic robotic leaders

Groom and Nass (2007)/roles of humans and robots/III/T+S
Underlying theories: Shared mental models
Key findings: Robots should be evaluated as complements to human team members (rather than duplicates) to take advantage of individual abilities of humans and robots
robots (Burke & Murphy, 2004), how to include humans in HRTs (Strohkorb
Sebo et al., 2020), consequences of the presence of robots on human decision-
making (Fuse & Tokumaru, 2020), and the optimal organizational structure
(Ranzato & Vertesi, 2017). Findings from these studies indicate that, inter alia,
loosely coupled teams are most successful (Ranzato & Vertesi, 2017) and
specialized interaction roles might impede the inclusion of human team
members in HRTs (Strohkorb Sebo et al., 2020).
Investigations of autonomy and control in both dyadic and multiple-
member HRTs either take a general view on the effects on teamwork
(Bradshaw et al., 2012) or a more specific focus on adjustable autonomy
(Sierhuis et al., 2003), in both conceptual and empirical efforts (e.g., Dias
et al., 2008; Gombolay, Gutierrez, et al., 2015; Goodrich et al., 2007; Sellner
et al., 2006). Findings indicate that somewhat autonomous robots and shared
control can facilitate the work of human team members and make HRTs more
efficient (e.g., Lee et al., 2010; Lewis et al., 2010; Sellner et al., 2006).
Demir et al. (2020)/roles of humans and robots/VII/n.i.
Underlying theories: Shared mental models
Key findings: “results indicate that effective team interaction and shared cognition play an important role in human-robot dyadic teaming performance.” (p. 1)

Researchers also have proposed an algorithm to predict team performance, based on the robot’s performance in interaction with human team members or when it is autonomous (Crandall et al., 2003), as well as various control approaches for human-directed robot teams (Musić et al., 2019; Musić & Hirche, 2018), a control framework for USAR (Yazdani et al., 2016), an “HRT planning-execution framework” (Manikonda et al., 2007, p. 92), and a simulation framework that aims to support the development of command and control architectures (Dudenhoeffer et al., 2001).
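The cited prediction work is not reproduced here, but its underlying intuition (extrapolating team output from a robot’s measured performance while interacting with a human versus while running autonomously) can be shown with a deliberately simplified toy model; the function, its parameters, and the numbers below are invented for illustration and are not Crandall et al.’s (2003) algorithm:

```python
def predict_team_performance(perf_interacting: float,
                             perf_autonomous: float,
                             n_robots: int) -> float:
    """Toy model: one operator services robots in a round-robin.

    At any moment exactly one robot is being interacted with (producing
    perf_interacting task units per time step) while the remaining
    n_robots - 1 robots run autonomously (producing perf_autonomous
    each). Summing gives the expected instantaneous team output.
    """
    if n_robots < 1:
        raise ValueError("need at least one robot")
    return perf_interacting + perf_autonomous * (n_robots - 1)

# With an interacting rate of 1.0 and an autonomous rate of 0.4,
# a three-robot team yields 1.0 + 0.4 * 2 = 1.8 task units per step.
print(predict_team_performance(1.0, 0.4, 3))
```

Even this crude sketch captures the trade-off the empirical findings describe: adding robots raises team output only to the extent that their autonomous performance holds up while the operator’s attention is elsewhere.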
Conceptual studies of leadership in HRTs cite potential stereotypes of
robotic leaders (Gladden, 2014) or look into emotions evoked by, benefits
of, and possible modes of robotic leadership (Samani & Cheok, 2011).
These studies present robotic leadership as a future phenomenon and, in
some cases, argue that it will emerge naturally (Gladden, 2014). Empirical
studies also introduce a scalable, generalizable mathematical framework to
model leader and follower behaviors in multiple-member HRTs and show
that this framework enables robots to influence human teams (Kwon et al.,
2019).
Finally, team perceptions might take the form of shared mental models,
which have been predicted (e.g., Nikolaidis & Shah, 2012) and studied to
determine their influence on team performance (Gervits et al., 2020), which
appears positive. In addition, a conceptual framework of in-group identification, cohesion, and entitativity relies on parallels with dynamics in all-
human teams (Abrams & der Pütten, 2020). Several studies investigate robots
as in-group members of multiple-member or dyadic HRTs and identify
positive effects on robot acceptance and anthropomorphization (e.g., Eyssel &
Kuchenbrandt, 2012; Fraune et al., 2017). Another related topic pertains to the
parallels between HRTs and human-animal teams, such as USAR teams that
include rescue dogs (e.g., Phillips et al., 2016). Arguably, human-animal
teams might provide models for developing HRTs.
Table 7. Empirical Studies on Multiple-Member HRTs Related to Inter-member Team Characteristics and Their Effects.
Author/ Robot Morphologyb/ Team Data Basis/
Subcategory/ Robot Levelc/Type of Interactiond/Team Participantsf/Time Underlying Independent
Disciplinea Embodiment Setupe Frameg Theories Variable(s)h Dependent Variable(s)h Key Findings
Burke and Murphy (2004) / roles of humans and robots / VII
  Robot: Functional (Inuktun Micro Variable Geometry Tracked Vehicle [n = 32], Inuktun Microtracs robot [n = 1]) / ↓ / Physical robot
  Interaction/setup: T
  Data: n = 33 (two field studies) / experienced firefighters seeking USAR certification / C
  Theories: Shared mental models, situational awareness
  IVs: Operator situational awareness (+); goal-oriented team communication (+)
  DVs: Task performance
  Key findings: "a minimum 2:1 human-to-robot ratio is required for effective robot-assisted technical search in USAR" (p. 307); goal-oriented team communication and a shared mental model of the search space and the task lead to better task performance

Crandall et al. (2003) / autonomy and control / I
  Robot: n.i. / ↓ / n.i.
  Interaction/setup: T
  Data: n_1 = 13, n_2 = 23 / n.i. / C (six 5-minute sessions each)
  Theories: n.i.; IVs: n.i.; DVs: n.i.
  Key findings: Proposal of a performance prediction algorithm for HRTs

Dias et al. (2008) / autonomy and control / VII
  Robot: Functional (Pioneer, Segway ER1) / ↓, ↔ / Physical robot
  Interaction/setup: T
  Data: n.i. / n.i. / C (15-minute run)
  Theories: Sliding autonomy methodology
  IVs: Sliding autonomy
  DVs: Performance (+)
  Key findings: Challenges of enabling sliding autonomy in HRTs can be overcome by the presence of six key capabilities (requesting help, maintaining coordination, situational awareness, granularity, prioritization, learning)

Fraune et al. (2017) / team perceptions / IV
  Robot: Functional (Mugbot) / ↔ / Physical robot
  Interaction/setup: T+S
  Data: n = 48 / 21 f / C
  Theories: Group theory, social identity
  IVs: Group (ingroup, outgroup) and competing teams; agent (human, robot)
  DVs: Liking (higher for in-group humans in most cases); anthropomorphism (higher for ingroup in all cases)
  Key findings: "participants favored the ingroup over the outgroup, and humans over robots. Group had a greater effect than Agent, so participants preferred ingroup robots to outgroup humans." (p. 1432)

Fuse and Tokumaru (2020) / roles of humans and robots / IV
  Robot: Humanoid (RoBoHoN) / n.i. / Physical robot
  Interaction/setup: T+S
  Data: n = 14 / Japanese university students / C (5 rounds)
  Theories: n.i.
  IVs: Presence of robot considering group norms (vs. no robot)
  DVs: Change in answers given (+ for change between rounds 1 and 2, n.s. for others)
  Key findings: "robots attempt to comply with a group norm affects human's decision-making" (p. 56081)
Group & Organization Management 48(6)
Gervits et al. (2020) / team perceptions / I
  Robot: Humanoid (PR2 by Willow Garage) / ↔ / Simulation/virtual robot
  Interaction/setup: T
  Data: n = 26 (from 36 originally recruited) / 19 m; age: M = 24.9 years (SD = 8.6); from university campus / C
  Theories: Shared mental models
  IVs: Robot shared mental models
  DVs: Performance (+)
  Key findings: Shared mental models help to improve performance and efficiency of HRTs
Wolf and Stock-Homburg
Gombolay, Gutierrez, et al. (2015) / autonomy and control / II
  Robot: Functional / ↔, ↑ / Physical robot
  Interaction/setup: T+S
  Data: n = 24 / 14 m, 10 f; age: range 20–42 years (M = 27 ± 7); recruited via email and around a university campus / C
  Theories: n.i.
  IVs: Presence of robot; robot decision-making authority
  DVs: Team efficiency (+/+); perceived likeability, appreciation, and understanding of co-leader (-/+)
  Key findings: "an autonomous robot can outperform a human worker in the allocation of part of or all tasks that have to be completed" (p. 293); people prefer to give control authority to the robot; "People value human teammates more than robotic teammates, however, providing robots authority over team coordination more strongly improves their perceived value compared to giving similar authority to a human team mate" (p. 293); people tend to "assign a disproportionate amount of work to themselves when working with a robot (. . .) rather than human team mates only" (p. 293)

Goodrich et al. (2007) / autonomy and control / I
  Robot: n.i. / ↓ / Simulation/virtual robot; physical robot
  Interaction/setup: T
  Data: n = 80 (four experiments with 16, 23, 11, and 30 participants, respectively) / n.a. / C
  Theories: n.i.
  IVs: Attention management aids (+); adaptive autonomy (+); information abstraction (+)
  DVs: Individual and team autonomy
  Key findings: Individual and team autonomy benefit from adjustable and adaptive autonomy; adjusting autonomy should also allow for shifting between management styles
Kwon et al. (2019) / leadership / I
  Robot: n.i. / ↔, ↓, ↑ / No embodiment
  Interaction/setup: T
  Data: n.i. / n.i. / n.i.
  Theories: Adaptive leadership theory
  IVs: Robot intervention (use of leader-follower graph)
  DVs: Leadership scores (+); task execution time (+); success rate (+)
  Key findings: Leader-follower graphs enable robots to influence human teams through "redirect[ion of] a leader-follower relationship, distract[ion of the] team, or lead[ing of] a team towards the optimal goal" (p. 2)

Lee et al. (2010) / autonomy and control / VII
  Robot: Functional (Pioneer P2-AT robots) / ↓ / Simulation/virtual robot (USARSim robotic simulation)
  Interaction/setup: T
  Data: n = 120 (in 60 teams) / University of Pittsburgh community, paid, no previous experience with robot control / C
  Theories: n.i.
  IVs: Robot autonomy (+); team organization (+/-)
  DVs: System performance
  Key findings: "Automating path planning improved system performance. Effects of team organization were equivocal." (p. 438)

Lewis et al. (2010) / autonomy and control / VII
  Robot: Functional (Pioneer P2-AT) / ↓ / Simulation
  Interaction/setup: T
  Data: n = 120 (in 60 teams) / University of Pittsburgh community, paid, no previous experience with robot control / C
  Theories: n.i.
  IVs: Robot autonomy (+); shared team authority (+)
  DVs: System performance
  Key findings: Automation of path planning in USAR HRTs helps to improve performance; "effects of team organization favored operator teams who shared authority for the pool of robots" (p. 1617)

Musić et al. (2019) / autonomy and control / V
  Robot: Functional (KUKA LWR 4+) / ↔, ↓ / Physical robot
  Interaction/setup: T
  Data: n = 48 / 12 f / C (experiment performed 10 times per participant)
  Theories: n.i.
  IVs: Type of feedback (no vs. binary vs. relative)
  DVs: Task performance (n.s./+)
  Key findings: Proposal of a control architecture for HRTs; feedback through wearable fingertip devices helps to increase performance

Ranzato and Vertesi (2017) / roles of humans and robots / VII
  Robot: Functional / ↓ / Physical robot (remote)
  Interaction/setup: T+S
  Data: n = 30 (6 teams of 5 each, with 3:2 gender ratio) / n.i. / C
  Theories: n.i.
  IVs: Team organizational structure (loose)
  DVs: Efficiency (+); communication (+); teammate trust (+)
  Key findings: Loosely coupled teams were found to be the most successful compared to tightly coupled hierarchical and consensus groups
Sellner et al. (2006) / autonomy and control / I
  Robot: Functional (Roving Eye, Mobile Manipulator, Crane) / ↓ / Physical robot
  Interaction/setup: T
  Data: n_1 = 2, n_2 = 32 / study 1: expert users of the robotic system; study 2: n.i. / C
  Theories: Situational awareness, concept of sliding autonomy
  IVs: Autonomy; extent of information provision
  DVs: Time to completion (-); success rate (+); human workload (-); average response time (-/+)
  Key findings: Robots purposefully asking for help result in more efficient and robust systems and enable human operators to gain situational awareness

Strohkorb Sebo et al. (2020) / roles of humans and robots / I
  Robot: Humanoid (Jibo) / ↔ / Physical robot
  Interaction/setup: T+S / (not specified)
  Data: n = 78 (in 26 teams) / 38 f; age: M = 16.82 years (SD = 0.72); from a high school program held at Yale University / C
  Theories: Social identity theory
  IVs: Specialized robot liaison (-); robot supportive utterances (+/n.s.)
  DVs: Human inclusion
  Key findings: "specialized roles may hinder human team member inclusion, whereas supportive robot utterances show promise in encouraging contributions from individuals who feel excluded." (p. 309)
Note: (a) Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit" approach and might comprise aspects of more than one considered research discipline. (b) n.i. = no information provided by author(s). (c) Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. (d) Team interaction: T = task interaction, T+S = task & social interaction. (e) Team setup: s = human, □ = robot. (f) Participants: f = female, m = male. (g) Time frame: C = cross-sectional, L = longitudinal. (h) Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.
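The coding scheme defined in the table notes can be illustrated as a small record structure. This is only a sketch: the field names and the `StudyRecord` type are hypothetical, while the legend values (robot level ↓/↔/↑, interaction T or T+S, time frame C or L, effects coded "+", "-", or "n.s.") and the example entry come from the table itself.

```python
# Hypothetical record structure for one coded study; the legend values
# follow the table notes, the field names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class StudyRecord:
    author_year: str
    subcategory: str
    discipline: str        # e.g., "VII" = (urban) search and rescue
    robot_level: str       # "↓" lower, "↔" same, "↑" higher level than humans
    interaction: str       # "T" task only, "T+S" task & social
    time_frame: str        # "C" cross-sectional, "L" longitudinal
    # independent variable -> coded effect ("+", "-", or "n.s.")
    effects: Dict[str, str] = field(default_factory=dict)

# Coded from the Burke and Murphy (2004) row of Table 7: both IVs showed
# positive effects on task performance.
burke_murphy_2004 = StudyRecord(
    author_year="Burke and Murphy (2004)",
    subcategory="roles of humans and robots",
    discipline="VII",
    robot_level="↓",
    interaction="T",
    time_frame="C",
    effects={
        "operator situational awareness": "+",
        "goal-oriented team communication": "+",
    },
)
```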
Limitations. Some empirical studies rely on small samples of fewer than 30 participants (e.g., Gombolay, Gutierrez, et al., 2015; Ranzato & Vertesi, 2017), and many feature quite young participants. The limitations of laboratory and cross-sectional studies, as detailed for Category 1, also hold for the studies in Category 2. We find slightly more variability in the considered team setups, but further research could broaden the considered constellations. Most studies in this category, though, already leverage theoretical bases.
Eyssel and Kuchenbrandt (2012) / team perceptions / I
  Robot: Humanoid / ↔ / Video/image of robot
  Interaction/setup: T+S / n.i.
  Data: n = 78 / German university students, 37 m, 40 f; age: M = 23.27 (SD = 3.29) / C
  Theories: Social identity theory
  IVs: Robot in in-group (vs. out-group)
  DVs: Warmth (+); mind attribution (+); psychological closeness (+); contact intentions (+); design preference (+)
  Key findings: Participants "rated the in-group robot more favourably . . . [and] also anthropomorphized it more strongly than the out-group robot" (p. 724)

Kuchenbrandt et al. (2013) / team perceptions / IV
  Robot: Humanoid (Nao) / n.i. / Physical robot
  Interaction/setup: T+S / n.i.
  Data: n = 45 / 25 m, 18 f; age: M = 24.81 years (SD = 5.00); German university students / C
  Theories: Social identity
  IVs: Robot in in-group (vs. out-group)
  DVs: Implicit anthropomorphization of robot (+); explicit anthropomorphization of robot (+); acceptance of robot (+); general willingness to interact with robot (+)
  Key findings: "Perceived in-group membership with the robot resulted in a greater extent of anthropomorphic inferences about the robot and more positive evaluations." (p. 409); additionally, participants with the robot in their in-group "showed greater willingness to interact with robots in general." (p. 409)
Table 8. (continued)
Marble et al. (2004) / autonomy and control / VII
  Robot: Functional / ↓ / Physical robot
  Interaction/setup: T+S
  Data: n = 11 / 1 f, 10 m; 4 expert users, 7 with no or some prior experience; INEEL employees / L (4 sessions in direct succession)
  Theories: n.i.
  IVs: Dynamic robot autonomy
  DVs: Target detection (+); situation awareness (+)
  Key findings: Autonomy of a robot should be adjustable to allow for situation awareness and task completion; participants varied greatly in their ability to trust a robot (i.e., allow autonomy); performance benefits from practice
Note: (a) Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit" approach and might comprise aspects of more than one considered research discipline. (b) n.i. = no information provided by author(s). (c) Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. (d) Team interaction: T = task interaction, T+S = task & social interaction. (e) Team setup: s = human, □ = robot. (f) Participants: f = female, m = male. (g) Time frame: C = cross-sectional, L = longitudinal. (h) Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.
Nikolaidis et al., 2015; You & Robert, 2016), coordination strategies and
frameworks (Iqbal et al., 2016; Shah et al., 2011; H. Wang et al., 2010), and
automated cooperation (Gao et al., 2012; J. Wang et al., 2008) all note positive
effects, such as on team fluency, perceived robot trustworthiness, or team
performance in HRTs.
Research on communication in multiple-member HRTs identifies information flows supported by sensor networks (Kantor et al., 2006), uses of "Human–Robot Interaction Operating Systems" (Fong et al., 2006), back-channeling (Jung et al., 2013), real versus simulated videos (Canning et al., 2014), and conflict moderation through robots (Jung et al., 2015). For dyadic and multiple-member HRTs, researchers have examined verbal versus non-verbal communication (e.g., Breazeal et al., 2005; Ciocirlan et al., 2019; Nikolaidis et al., 2018; Williams et al., 2015), as well as communication based on a partner's knowledge and behavior (Lo et al., 2020). These studies mostly reveal positive effects of (extensive) communication in HRTs (Tables 9–12). Another research pathway involves communication interfaces (Marge et al., 2009), communication models (e.g., Kruijff, Janı́ček, et al., 2014; Nakano & Goodrich, 2015), and the design and implementation of HRTs for conversational HRI (Zheng et al., 2013).
Research into collaboration in HRTs refers to linkages explicitly estab-
lished in an HRT context, which should not be confused with the broader topic
of HRC. Conceptual studies range in focus, including the optimal setup of
“hybrid teams” with robots, virtual agents, and humans as team members
(Schwartz et al., 2016), collaboration challenges (Fiore et al., 2011), col-
laborative tools (Bruemmer & Walton, 2003), the development of collabo-
rative robotic teammates (Hayes & Scassellati, 2014), dynamic peer-to-peer
teaming (Tang & Parker, 2006), task-oriented collaboration with semantic-
based path planning (Yi & Goodrich, 2014), decision-making (Stewart et al.,
2012), and mutual initiatives (Bruemmer et al., 2002). Researchers also examined collaboration frameworks (e.g., Hoffman & Breazeal, 2004; Marble et al., 2003) for dyadic HRTs and a framework of joint action perception (Iqbal et al., 2015) for multiple-member HRTs. Finally, for trust, three studies of dyadic HRTs discuss and examine the impact of appropriate trust (i.e., trust that is beneficial for team performance) (Chen et al., 2020; Ososky et al., 2013) and its measurement (Freedy et al., 2007).
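The two trust ideas mentioned here can be reduced to a minimal sketch. It does not reproduce the cited models: Freedy et al. (2007) derive an objective trust measure from operator overrides, read here as a simple intervention ratio; Ososky et al. (2013) and Chen et al. (2020) argue for appropriate rather than maximal trust, read here as calibration of trust against robot reliability. The function names and the tolerance value are hypothetical.

```python
# Illustrative sketch only; names and thresholds are not from the cited studies.

def observed_trust(interventions: int, opportunities: int) -> float:
    """Trust proxy in [0, 1]: the share of opportunities on which the
    operator did NOT override the robot (cf. override-based measures)."""
    if opportunities <= 0:
        raise ValueError("opportunities must be positive")
    if not 0 <= interventions <= opportunities:
        raise ValueError("interventions must lie in [0, opportunities]")
    return 1.0 - interventions / opportunities

def is_appropriately_calibrated(trust: float, reliability: float,
                                tolerance: float = 0.1) -> bool:
    """Trust counts as 'appropriate' when it stays close to the robot's
    actual reliability: trust far above reliability signals over-trust
    (misuse), trust far below it signals under-trust (disuse)."""
    return abs(trust - reliability) <= tolerance

# An operator who intervened on 3 of 20 chances shows a trust proxy of 0.85;
# paired with a robot that succeeds 80% of the time, that trust is calibrated.
print(is_appropriately_calibrated(observed_trust(3, 20), 0.80))  # prints True
```

The point of the sketch matches the cited finding that maximizing trust is not the goal: a trust value of 1.0 toward a robot with 0.5 reliability would fail the calibration check.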
Table 9. Columns: Author/Subcategory/Discipline (a); Robot Morphology (b)/Robot Level (c)/Embodiment; Type of Team Interaction (d)/Team Setup (e); Underlying Theories; Key Findings.
Alboul et al. (2008) / (physical) coordination / I
  Robot: n.i. / ↓ / Physical robot
  Interaction/setup: T
  Theories: n.i.
  Key findings: Proposal of a theoretical framework for navigation in HRTs

Bradshaw et al. (2009) / (physical) coordination / I
  Robot: n.i. / n.i. / n.i.
  Interaction/setup: T+S / n.i.
  Theories: Coordination theory
  Key findings: Coordination in human-agent-robot teams is an essential ingredient of joint activities; fulfillment of the teamwork model and the resulting expectations towards communication (towards leader and colleagues) will allow robots to be seen as teammates

Brown et al. (2005) / (physical) coordination / III
  Robot: n.i. / ↓ / n.i.
  Interaction/setup: T
  Theories: n.i.
  Key findings: Proposal of a reference framework for HRTs

Bruemmer et al. (2002) / collaboration / III
  Robot: Functional (augmented ATRVJR) / ↓, (↔, ↑) / Physical robot
  Interaction/setup: T / n.i.
  Theories: Role theory, shared mental models
  Key findings: Proposal of a framework for mutual initiative in HRTs

Bruemmer and Walton (2003) / collaboration / III
  Robot: Functional (augmented ATRVJR) / n.i. / n.i.
  Interaction/setup: T / n.i.
  Theories: Shared mental models
  Key findings: Discussion of an approach for a control architecture for human–robot teams in a military context
Fiore et al. (2011) / collaboration / III
  Robot: n.i. / n.i. / n.i.
  Interaction/setup: T+S / n.i.
  Theories: n.i.
  Key findings: Successful interactions in HRTs are based on organizational (and corresponding roles), social, and cultural models; research has to work on gaining insights into how robots fit into such models and how they can understand organizational, social, and cultural factors

Hayes and Scassellati (2014) / collaboration / I
  Robot: n.i. / n.i. / n.i.
  Interaction/setup: T+S / n.i.
  Theories: n.i.
  Key findings: Proposal of four research questions on collaboration in HRTs

Kruijff, Janı́ček, et al. (2014) / communication / VII
  Robot: Functional ("Generaal", P3-AT; NIFTi UGV and UAV) / ↓ / Physical robot
  Interaction/setup: T+S
  Theories: Situational awareness
  Key findings: Proposal and validation of a "user-centric design methodology in developing systems for human-robot teaming in Urban Search and Rescue" (p. 1); robot acceptance is important

Kruijff et al. (2012) / communication / III
  Robot: Functional / ↓ / Physical robot
  Interaction/setup: T
  Theories: Situational awareness
  Key findings: Proposal of an experience and communication model to support shared human–robot activities
Kruijff-Korbayová et al. (2015) / communication / IV
  Robot: Functional (NIFTi UGV and UAV) / ↓ / Physical robot
  Interaction/setup: T
  Theories: Situational awareness
  Key findings: Description of the project "TRADR: long-term human-robot teaming for robot assisted disaster response" (p. 193) and the user-centric design approach that is used

Nakano and Goodrich (2015) / communication / V
  Robot: n.i. / n.i. / n.i.
  Interaction/setup: n.i. / n.i.
  Theories: n.i.
  Key findings: Proposal of a "new interface concept, a Graphical Narrative Interface (GNI)" (p. 634); "We hypothesize that the GNI allows users to search and analyze spatiotemporal information more easily and quickly than a typical GUI." (p. 634)

Nourbakhsh et al. (2005) / communication / VII
  Robot: n.i. / n.i. / n.i.
  Interaction/setup: T / n.i.
  Theories: n.i.
  Key findings: Proposal of an agent-based "architecture for Urban Search and Rescue and a methodology for mixing real-world and simulation-based testing" (p. 72)

Schwartz et al. (2016) / collaboration / I
  Robot: Humanoid (Aila), functional (Artemis, Compi) / n.i. / n.i.
  Interaction/setup: n.i. / n.i.
  Theories: n.i.
  Key findings: Discussion of the setup of teams with robots, virtual agents, and humans as team members ("hybrid teams")

Stewart et al. (2012) / collaboration / IV
  Robot: n.i. / n.i. / n.i.
  Interaction/setup: n.i. / n.i.
  Theories: Decision theory
  Key findings: Proposal of a decision-making model for HRTs
Tang and Parker (2006) / collaboration / I
  Robot: n.i. / ↔ / n.i.
  Interaction/setup: T / n.i.
  Theories: Information invariance theory, schema theory
  Key findings: Proposal of the human–robot teaming approach ASyMTRe, dealing "with the issue of how to organize robots […]"

Note: (a) Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit" approach and might comprise aspects of more than one considered research discipline. (b) n.i. = no information provided by author(s). (c) Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. (d) Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s). (e) Team setup: s = human, □ = robot.
two runs over a 2-day period though. We find simulation studies (H. Wang et al., 2010), laboratory studies (You & Robert, 2016), and field experiments (Burke & Murphy, 2007). Studies of multiple-member HRTs mostly focus on task interactions, and the setups include human-directed robot teams (14 studies) or autonomous mixed teams (9 studies), or else are not specified. With dyadic HRTs, researchers examine autonomous human–robot pairings (12), human-directed robots (4), or do not disclose their setups (4).
In this category, almost half of the studies report theoretical considerations. These range from coordination theory (Malone & Crowston, 1990) and role theory (Braga, 1972) to social signaling and back-channeling (Dennis & Kinney, 1998). Again, multiple studies rely on shared mental models (Rouse & Morris, 1986). Another popular theory is situational awareness, which serves as the basis for various studies, as detailed in Tables 9–12.
Limitations. The study samples tend to be small and young, and many studies use laboratory settings to conduct cross-sectional experiments. Social robots are underrepresented relative to functional robots, despite being developed specifically to interact with people (Kirby et al., 2010), which implies their particular suitability for HRTs. With regard to the theoretical basis, we see potential for studies to strengthen the theoretical soundness of the examined phenomena by integrating behavioral theories.
Breazeal, Brooks, et al. (2004) / collaboration / I
  Robot: Humanoid (Leonardo) / ↔ / Physical robot
  Interaction/setup: T+S
  Theories: Collaborative discourse theory, joint intention theory
  Key findings: The authors follow a perspective "of a balanced partnership where the human and robot maintain and work together on shared task goals" (p. 270); the paper gives an overview of the robot's different robotic features

Breazeal, Hoffman, and Lockerd (2004) / collaboration / I
  Robot: Humanoid (Leonardo) / ↔ / Physical robot
  Interaction/setup: T+S
  Theories: Collaborative discourse theory, joint intention theory
  Key findings: Presentation of an approach for collaborative human–robot teamwork

Oh et al. (2015) / (physical) coordination / V
  Robot: n.i. / n.i. / n.i. (no robot involved in experiments)
  Interaction/setup: T+S / n.i.
  Theories: n.i.
  Key findings: Proposal and validation of a model for indirect perception in HRTs

Ososky et al. (2013) / trust / IV
  Robot: n.i. / ↔ / Physical robot
  Interaction/setup: T+S
  Theories: Shared mental models
  Key findings: Trust in HRTs should not simply be maximized; the goal should be to have appropriate trust (both in intention and ability)

Shah and Breazeal (2010) / (physical) coordination / IV
  Robot: n.i. / n.i. / n.i. (no robot involved in experiments)
  Interaction/setup: T+S / n.i.
  Theories: Shared mental models
  Key findings: Implicit and explicit communication in human–human teams gives insights into how robots in HRTs could act; "a robot should respond to communications differently, depending on whether they are implicit, explicit, verbal only, nonverbal only (gesture), or combined." (p. 244)

Visser et al. (2020) / trust / IV
  Robot: n.i. / ↔ / n.i.
  Interaction/setup: T+S / n.i.
  Theories: Theory of mind, trust theories
  Key findings: Proposal of a human–robot team trust model that takes a longitudinal perspective on the development and calibration of trust in HRTs
Note: (a) Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit" approach and might comprise aspects of more than one considered research discipline. (b) n.i. = no information provided by author(s). (c) Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. (d) Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s). (e) Team setup: s = human, □ = robot.
Burke and Murphy (2007) / collaboration / VII
  Robot: Functional (Inuktun Micro Variable Geometry Tracked Vehicle (VGTV) robot) / ↓ / Physical robot
  Interaction/setup: T
  Data: n = 62 / 90% m; majority between 35 and 54 years; NASA USAR task-force personnel / L (two 20-minute runs over a 2-day period; final n = 50 (teams completing both runs))
  Theories: Shared mental models, situational awareness
  IVs: Remote shared visual presence (+); visual contact (n.s.)
  DVs: Team performance
  Key findings: Remote shared visual presence may help remote USAR HRTs "to perform as effectively as collocated teams" (p. 161)

Canning et al. (2014) / communication / I
  Robot: Humanoid & functional (Xitone Design MDS, Willow Garage PR2, VGo, iRobot Create) / ↓ / Simulation/virtual robot, video/image of robot
  Interaction/setup: T
  Data: n_1 = 24, n_2 = 137, n_3 = 183 / mTurk; 1: 12 f, age: range 18–31 years, M = 20.88 (SD = 2.59); 2 & 3: all right-handed, fluent in English; 2: 48 f, age: range 18–60 years, median = 31, US residents; 3: 91 f, age: range 18–60 years, median = 31, US residents / C
  Theories: n.i.
  IVs/DVs: Video feed type (real vs. simulated): task performance (n.s.), perceived collaboration (+ for real video), perceived utility (+ for real video); video feed type (real vs. simulated) and introduction of robot: task performance (n.s.), perceived collaboration (n.s.), perceived utility (n.s.), perceived competence (n.s.), perceived warmth (n.s.)
  Key findings: Examination of robot perceptions in remote team settings; "realism of the [video]feed becomes important when the human teammate knows about the robot's appearance and they work together on a task" (p. 4361); see study for details on results and interaction effects of study 3

Fong et al. (2006) / communication / VI
  Robot: Functional, humanoid (K10 rover, Robonaut) / ↓, ↔ / Physical robot
  Interaction/setup: T
  Data: n.i. / n.i. / C
  Theories: n.i.
  IVs: Reliability of robots (independence, as a result of understanding of communication)
  DVs: Productivity (amount of useful work, exposure time in space) (+)
  Key findings: Software frameworks are being developed (e.g., HRI/OS) to allow for effective work of humans and robots

Gao et al. (2012) / (physical) coordination / VII
  Robot: n.i. / ↓ / Simulation/virtual robot (USARsim)
  Interaction/setup: T
  Data: n = 48 / 19 f; age: range 19–47 years, M = 26.6 (SD = 5.5); 33 of them students / C
  Theories: n.i.
  IVs: Team structure (pooled, sector); search guidance (no, suggestion, enforced)
  DVs: Task performance (n.s.); task completion time (- for suggested guidance in sector teams, n.s. for other conditions); subjective workload (-)
  Key findings: "Automated search guidance neither increased nor decreased performance" (p. 81); "Search guidance decreased average task completion time in Sector teams" (p. 81); "pooled teams experienced lower subjective workload than […]"
Table 11. (continued)
Columns: Author/Subcategory/Discipline (a); Robot Morphology (b)/Robot Level (c)/Embodiment; Type of Team Interaction (d)/Team Setup (e); Data Basis/Participants (f)/Time Frame (g); Underlying Theories; Independent Variable(s) (h); Dependent Variable(s) (h); Key Findings.
Iqbal et al. (2015) / collaboration / V
  Robot: Functional (Turtlebot) / ↔ / Physical robot
  Interaction/setup: T
  Data: n = 2 / n.i. / n.i.
  Theories: n.i.; IVs: n.i.; DVs: n.i.
  Key findings: Proposal of an event-based model to enable robotic action perception in HRTs

Iqbal et al. (2016) / (physical) coordination / V
  Robot: Functional (Turtlebot) / ↔ / Physical robot
  Interaction/setup: T
  Data: n_pilot = 7, n_main = 27 (in 9 groups) / pilot: 3 f; main: 14 f, age: M = 22.93 (SD = 3.98), mainly students / C
  Theories: n.i.
  IVs: Robot movement based on synchronization-index-based anticipation (vs. event cluster-based anticipation)
  DVs: Synchronization (+); robot timing appropriateness (+)
  Key findings: Proposal and validation of an "approach to enable robots to perceive human group motion in real time to anticipate future actions and synthesize their own motion accordingly" (p. 909); "the robot performs better when it has an understanding of high-level group behavior than when it does not" (p. 909)

Iqbal and Riek (2017) / (physical) coordination / V
  Robot: Functional (Turtlebot) / ↔ / Physical robot
  Interaction/setup: T
  Data: n = 18 (in 6 groups) / 11 f; age: M = 24.7 years (SD = 4.5); undergraduate and graduate students / C
  Theories: n.i.; IVs: n.i.; DVs: n.i.
  Key findings: "results might suggest that an addition of a robot with heterogeneous behavior to a group significantly reduces the overall group coordination, and might be an important indicator of human-robot group dynamics." (p. 1716)

Jung et al. (2013) / communication / VII
  Robot: Humanoid (Maddox and Nexi), functional (UAV, not specified) / ↔ / Physical robot
  Interaction/setup: T+S
  Data: n = 73 / age: range 18–40 years (M = 25.0, SD = 6.19); from university community / C
  Theories: Back-channeling, social signaling
  IVs: Back-channeling
  DVs: Team functioning (+); perceived robot engagement (+); perceived robot competence (-)
  Key findings: "subtle back-channeling by robots in human–robot teams helped team functioning (lower stress, lower cognitive load) and perceived engagement of the robots, especially when the task was complex, but at the same time lead to robots being seen as less competent." (p. 1563); "the biggest benefits from back-channeling in human–robot teams may be seen when tasks are demanding and complex." (p. 1563)
Jung et al. (2015) / communication / IV
  Robot: Functional (Pioneer 3 robot base + OWI robot arm + arm-control board + speaker) / ↔ / Physical robot
  Interaction/setup: T+S
  Data: n = 106 (in 53 teams) / 55 m; age: range 18–65 years (M = 24.5, SD = 8.0); recruited from university / C
  Theories: n.i.
  IVs: Robot intervention
  DVs: Awareness of conflict (+); affect (+/n.s.); perceptions of team members' contributions (n.s.); team performance
  Key findings: "we found that the robot's repair interventions increased the groups' awareness of conflict after the occurrence of a personal attack thereby acting against the groups' tendency to suppress the […]"

Marge et al. (2009) / communication / V
  Robot: Functional (Pioneer P2-DX, Segway Robotic Mobility Platform (RMP)) / ↔ / Simulation/virtual robot
  Interaction/setup: T+S
  Data: n.i. / n.i. / n.i.
  Theories: n.i.; IVs: n.i.; DVs: n.i.
  Key findings: Description of the human–robot interface TeamTalk

Nevatia et al. (2008) / collaboration / VII
  Robot: Functional / ↓ / Simulation/virtual robot
  Interaction/setup: T
  Data: n.i. / n.i. / n.i.
  Theories: n.i.; IVs: n.i.; DVs: n.i.
  Key findings: Proposal and validation of an "integrated system for semiautonomous cooperative exploration, augmented by an intuitive user interface for efficient human supervision and control" (p. 2103); "having a human in the loop improves task performance, especially with larger numbers of robots" (p. 2103)

H. Wang et al. (2010) / (physical) coordination / VII
  Robot: Functional (Pioneer P2-AT) / ↓ / Simulation/virtual robot (USARSim robotic simulation)
  Interaction/setup: T
  Data: n = 60 (acting in teams of 2, i.e., 30 teams) / University of Pittsburgh, paid, no previous experience with robot control / C
  Theories: Situational awareness
  IVs: Automated path planning (+); team organization (shared authority for robots) (+)
  DVs: System performance
  Key findings: For USAR tasks, automated path planning helps to improve team accuracy and performance; sharing authority for robots during team organization also helps to improve performance (regarding accuracy and finding)
J. Wang et al. (2008) / (physical) coordination / IV
  Robot: Functional (P2DX robots, Zergs) / ↓ / Simulation/virtual robot
  Interaction/setup: T
  Data: n = 19 / age: range 19–33 years; from Pittsburgh university / C
  Theories: Crandall's neglect tolerance model, situational awareness
  IVs/DVs: Needed physical proximity: team performance (-), coordination demands (n.s.); automation of cooperation: team performance (+), coordination demands (+)
  Key findings: "Automating cooperation [by using subteams] reduced CD [coordination demands] and improved performance." (p. 9)

Williams et al. (2015) / communication / IV
  Robot: Functional (VGo, Roompi) / ↓ / Physical robot
  Interaction/setup: T
  Data: n_1 = 28, n_2 = 28 / studies 1 & 2: 14 f, age: range 18–65, mostly students / C
  Theories: n.i.
  IVs: Robot-robot communication (verbal, silent)
  DVs: Perceived creepiness of the robot (1: n.s.; 2: + for silent communication); perceived trustworthiness of the robot (1 & 2: n.s.); perceived efficiency of the robot (1 & 2: n.s.); perceived cooperativity of the robot (1 & 2: n.s.)
  Key findings: "silent communication of task-dependent, human-understandable information among robots is perceived as creepy by cooperative, co-located human teammates" (p. 24); "increased natural language interaction with a robot enhances humans' general perceptions of that robot" (p. 38)

You and Robert (2016) / (physical) coordination / IV
  Robot: Functional/humanoid (adapted from the LEGO® Mindstorms® EV3 sets) / ↓ / Physical robot
  Interaction/setup: T
  Data: n = 60 / 36 f; age: M = 22.86 years (SD = 4.51); from a university in the US / C
  Theories: n.i.
  IVs: Training
  DVs: Individual performance (+/n.s.); team performance (+/n.s.)
  Key findings: "training minimized the negative impacts of curiosity and heightened the positive impacts of control on task involving the use of a robot." (p. 449)
Zheng et al. (2013) / communication / V
  Robot: Humanoid (Robovie-II) / ↔ / customer & operator study: n.a.; simulation: simulation/virtual robot; case study: physical robot
  Interaction/setup: T
  Data: n_customer = 15, n_operator = 16, n_simulation = 15, n_case study = n.i. / customer: 8 f, age: M = 22 years; operator: 7 f, age: M = 21 years
  Theories: n.i.; IVs: n.i.; DVs: n.i.
  Key findings: Introduction of a simulation tool for "models for operation timing, customer satisfaction and customer–robot interaction" (p. 843), and "techniques for managing interaction flow and operator […]"
Note: (a) Disciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are categorized based on a "best fit" approach and might comprise aspects of more than one considered research discipline. (b) n.i. = no information provided by author(s). (c) Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level. (d) Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s). (e) Team setup: s = human, □ = robot. (f) Participants: f = female, m = male. (g) Time frame: C = cross-sectional, L = longitudinal. (h) Empirical findings: (-) = negative effect, (+) = positive effect, (n.s.) = not significant.
Table 12. Empirical Studies on Dyadic HRTs Related to Team Processes and Their Effects.
Bozcuoglu et al. Functional T+S n.i./n.i./n.i. n.i. • n.i. • n.i. • Transparency on robotic behavior
(2015)/ (Quadcopter)/↓, and reactions through
communication/ ↔/Simulation/ communication helps to increase the
VII virtual robot success of HRTs
Breazeal et al. Humanoid T+S n = 21/10m; age: range 20–40; Shared mental • Non-verbal social cues and • Task performance • Non-verbal communication plays an
(2005)/ (“Leo(nardo)")/↓, local campus, no interaction models behavior (understandability of the important role also in the
communication/I ↔/Physical robot with robot before/C robot, efficiency of task effectiveness of HRTs
performance, robustness
to errors that arise from
miscommunication) (+)
Chen et al. (2020)/ Functional/n.i./ T+S n_1 = 201 (simulation), n_2 = n.i. • Trust • Team performance (+/−; • Proposal of computational model to
trust/V Simulation/virtual 20 (real robot)/1: age: range appropriate level of trust integrate trust into robotic behavior
robot, Physical 18–65 years, mTurk, from needed for best • "maximizing trust alone does not
robot the US, 2: age: range 21–65 performance) always lead to the best performance”
years, from University/C (p. 9:1)
Ciocirlan et al. Humanoid (TIAGo)/ T+S/n.i. n = 71/40 m, 30 f; age: range Trust theories • Communication (no • Trust (+ for task • "the decrease in trust when the robot
(2019)/ n.i./Simulation/ 14–53 years, M = 24 years communication, text and communication) fails to perform the task is lower
communication/ virtual robot (SD = 6)/C verbal task when [there] is text and verbal
IV communication, text and interaction between the robot and
verbal informal the participant” (p. 7)
communication) • "Trust at the end of the experiment
was higher than the initial trust when
the participants had a text and verbal
interaction communication related
to the task” (p. 7)
Freedy et al. (2007)/ Functional (unmanned T+S n = 12/4 f; age: range 18–25 Collaborative • Robot competency • Time to complete mission • Introduction of an objective measure
trust/III ground vehicle)/ years, most with several performance (−) of trust dependent on the number of
↓/Simulation/virtual years of gaming experience, model • Operator intervention (−) operator overrides/interventions
robot 1.5 hours of training/L (15 • Workload (−) • Knowledge about robot
trials/participant, 5 trials of 3 • Trust • Human intervention (−/+; competencies and characteristics
competency levels in firing appropriate level of trust (e.g., level of performance) can help
behavior each) needed) to foster trust
(continued)
Table 12. (continued)
Author/ Robot Morphologyb/ Team
Subcategory/ Robot Levelc/Type of Interactiond/ Data Basis/Participantsf/Time Underlying
Disciplinea Embodiment Team Setupe Frameg Theories Independent Variable(s)h Dependent Variable(s)h Key Findings
Hoffman and Humanoid (“Leo")/↓, T+S n.i./n.i./C Dialog theory, • n.i. • n.i. • Proposal of a framework for dynamic
Breazeal (2004)/ ↔/Physical robot joint intention collaboration
collaboration/I theory • To establish successful HRTs, robots
and humans have to share the same
goals, communicate with each other
and show commitment to jointly
reach their goals
Hoffman and Functional (Symon, T+S n = 32/15 f; MIT community, n.i. • Robot anticipatory action • Task efficiency (+/n.s.) • Anticipatory action of a robotic
Breazeal (2007)/ forklift-like)/ laboratory/C • Perceived robot teammate helps to increase task
collaboration/I ↔/Simulation/ contribution to team efficiency and improves “the
virtual robot fluency (+) perceived commitment of the robot
• Perceived robot to the team and its contribution to
contribution to team team’s fluency and success” (p. 1)
success (+)
• Perceived robot
commitment (+)
Koppula et al. Functional/↓/Physical T+S n = 5/n.i./n.i. n.i. • Anticipatory planning • Perceived robot • Proposal of graphical model to
(2016)/ robot (Kodiak collaboration (+) anticipate human actions
collaboration/I [PR2])/simulation/ • Perceived robot timing (+)
virtual robot • Satisfaction with robot (+)
• Willingness to work with
the robot (+)
• Time savings (+ /not stated
explicitly)
Lo et al. (2020)/ Functional/↔/Physical T+S n = 16/8 f, visitors or students n.i. • Robot motion planning • Perceived clarity of intent • Proposal of model for multi-agent
communication/V robot at the campus/C approach (nested (+) planning based on partner’s
inference for • Motion predictability and knowledge and behavior (NICA)
corroborative acts naturalness (+) • Experiment shows that NICA “is
(NICA) versus legible • Perceived social perceived as significantly more
motion) appropriateness (+) natural, socially appropriate, and
• Perceived safety, fluent to team with, while being both
intelligence, capabilities, more predictable and intent-clear”
thoughtfulness, and fluency (p. 326)
to team with of the robot
(n.s./+)
(continued)
Table 12. (continued)
Marble et al. (2003)/ Functional (ATRVJr)/ T+S n = 11/1 f, 10 m; 4 expert users, n.i. • Mixed-initiative • Adaptation to autonomy • Utilization of robot autonomous
collaboration/VII ↓/Physical robot 7 no or some prior interaction (not reported) capabilities depends on previous
experience; INEEL • Perceived ease to predict robotic experience of users
employees/C outcome of control (not (inexperienced users utilize
reported) autonomy more willingly)
• Control challenges should be
considered
Nikolaidis et al. Functional/ T+S n_1 = 36, n_2 = 24/1: recruited Shared mental • Team training (human–robot • Mental model • "cross-training yields statistically
(2015)/(physical) ↓/Simulation/virtual from MIT; 2: n.i./C models cross-training) convergence (+) significant improvements in
coordination/IV robot, Physical • robot trustworthiness quantitative team performance
robot (+) measures, as well as significant
• Team fluency (+) differences in perceived robot
performance and human trust”
• Team training (human–robot • Objective and subjective (p.1711)
cross-training) without measures of team • "This study supports the hypothesis
learning component in fluency and participant’s that the effective and fluent teaming
algorithm satisfaction (n.s.) of a human and a robot may best be
achieved by modeling known,
effective human teamwork
practices.” (p. 1711)
Nikolaidis and Shah Functional/↓, T+S n = 36/recruited from MIT/C Shared mental • Team training (human– • Mental model convergence • A good way to achieve effective and
(2013)/(physical) ↔/Physical robot models robot cross-training) (+) fluent human–robot teaming may be
coordination/IV • Mental model similarity (+) to model effective practices for
• Team fluency (concurrent human teamwork (p. 33)
motion, idle time) (+) • Human–robot cross-training leads to
• Perceived robot “statistically significant
performance (+) improvements in quantitative team
• Human trust (+) performance measures” (p. 33)
(compared to standard
reinforcement learning techniques)
Nikolaidis et al. Humanoid (HERB)/ T+S n_1 = 151 (from initial 200- Game theory • Robot communication • Trust in the robot (+ /n.s.) • "enabling the robot to issue verbal
(2018)/ ↔/Video/image of exclusions)/1: 60% female, • Adaption to robot (+ /n.s.) commands is the most effective form
communication/I a robot (video age: M = 35 years, from US, of communicating objectives, while
playback) mTurk/C retaining user trust in the robot.” (p.
22:1)
(continued)
Table 12. (continued)
Author/ Robot Morphologyb/ Team
Subcategory/ Robot Levelc/Type of Interactiond/ Data Basis/Participantsf/Time Underlying
Disciplinea Embodiment Team Setupe Frameg Theories Independent Variable(s)h Dependent Variable(s)h Key Findings
Shah et al. (2011)/ Humanoid (Nexi, T+S n = 16 subjects/10 m; age: M = n.i. • Usage of robot plan • Human idle time ( ) • Chaski (task-level executive for
(physical) a Mobile- 29.4 years (SD = 16.1), execution system Chaski • Time to complete task (n.s.) robots) is able to reduce human idle
collaboration/I Dexterous-Social recruited from the MIT and • Robot trustworthiness (+) time significantly and by this
(MDS) robot)/ Greater Boston area/C • Team fluency (n.s.) supports the hypothesis that it can
↔/Physical robot • Perceived robot help to increase team performance
performance (n.s.)
• Sharing of common goals
(n.s.)
Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b
n.i. = no information provided by author(s).
c
Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d
Team interaction: T = task interaction, T+S = task & social interaction, n.i. = no information provided by author(s).
e
Team setup: s = human, □ = robot.
f
Participants: f = female, m = male.
g
Time frame: C = cross-sectional, L = longitudinal.
h
Empirical findings: (−) = negative effect, (+) = positive effect, (n.s.) = not significant.
cognitive science (6), HRI (5), ethics (2), management (1), and military (1)
research. All 9 empirical studies are cross-sectional, but they feature both
laboratory and field experiments. Given the complexity of teams, the
comparatively small number of studies that take an integrative perspective is
surprising.
These studies include both functional and humanoid robots, such as those
adapted from Lego® Mindstorms® sets (You & Robert, 2018b; 2019a;
2019b), but not any social robots. Multiple-member HRTs all reflect human-
directed robot teams that are, in most cases, collaborative. Studies of dyadic
HRTs also mostly consider dyadic collaborative teams and include human-
directed robots or an autonomous human–robot pairing.
About half of the studies in this category specify their theoretical foun-
dation and build, for example, on motivational theories of individual and team
motivation (Kanfer et al., 2008). Other theories include media synchronicity (Dennis
et al., 2008); the technology acceptance model (Davis, 1986) and the unified model
of technology acceptance and use of technology (Venkatesh et al., 2003); social
identity theory; the IPO model; notions of trust in relation to teamwork (Zaheer et al.,
1998), technology (McKnight et al., 2011), and robots (Yagoda & Gillan, 2012); and
social categorization and attraction theories (Hogg & Turner, 1985).
Discussion
Summary of Findings of Existing Research
Despite vastly different definitions of HRTs and distinct research foci, re-
searchers from multiple disciplines all pursue insights into their aspects and
related processes. In Figure 5 we summarize the main categories and sub-
categories linked to the IPO model of teams. Because so few studies examine
moderating effects, we cannot identify further subcategories. In addition, we
find that extant research exhibits a dominant focus on HRT inputs and
processes, so we do not elaborate further on the subcategories of team outputs.
Intra-member team characteristics are considered less frequently than
other topics and primarily in relation to team setups and processes, probably
due to their interdependencies with HRI and HRC. Nonetheless, research on
robot behavior is rooted in an HRT context and reveals that positive robot
behaviors and transparency exert positive effects on team processes and
outcomes. In studies that examine both physical and behavioral robotic
characteristics, we also find important hints for research directions, especially
in terms of a holistic robot design. Human preferences and behaviors are
equally interesting topics to include in efforts to understand HRTs fully.
Inter-member team characteristics have been examined more extensively;
autonomy, control, and leadership are included in many studies. Vastly dif-
ferent definitions of HRTs, across a variety of team setups (e.g., leadership),
affirm the logic of this central focus. Yet we also note that all empirical studies
on (sliding) autonomy and control in HRTs indicate that (partially) autono-
mous robots and shared control can facilitate the work of human team
members and make HRTs more efficient.
Compared with individual team members and team characteristics, team
processes in HRTs and their effects have been investigated very intensively.
Physical coordination has long been a topic, primarily with a focus on robotics
aspects and the development of coordination concepts, but collaboration in
HRTs has come to the attention of researchers only more recently. Here,
interesting parallels are being drawn between HRTs and all-human teams with
regard to the benefits of coordination or communication mechanisms. In
general, studies indicate that well-choreographed coordination and commu-
nication efforts are key success factors for HRTs.
The presence of moderating effects in HRTs is coming more into focus, and
it remains an important consideration because moderator variables exert
effects on the relationships of team inputs, processes, and outputs. There is not
one size that fits all HRTs, so further investigations should seek insights into
relevant moderators and their effects.
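Moderation of the kind described above, in which a moderator alters the strength of an input–output relationship, is commonly tested by adding an interaction term to a regression of team outputs on team inputs (cf. Baron & Kenny, 1986, in the references). A minimal sketch with simulated data; all variable names and values here are hypothetical illustrations, not drawn from any reviewed study:

```python
import numpy as np

# Simulate a team input (x), a moderator (m), and a team output (y)
# whose input-output link strengthens as the moderator increases.
rng = np.random.default_rng(0)
n = 2_000
x = rng.normal(size=n)  # hypothetical team input
m = rng.normal(size=n)  # hypothetical moderator
y = 0.5 * x + 0.2 * m + 0.4 * (x * m) + rng.normal(scale=0.5, size=n)

# Moderated regression: y = b0 + b1*x + b2*m + b3*(x*m) + error.
# A non-zero b3 indicates that m moderates the x -> y relationship.
X = np.column_stack([np.ones(n), x, m, x * m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = coef
print(f"interaction coefficient b3 ≈ {b3:.2f}")  # close to the simulated 0.4
```

The interaction coefficient `b3` captures the moderating effect; the main effects `b1` and `b2` alone would miss it, which is why purely main-effect designs cannot detect the "no one size fits all" pattern noted above.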
Finally, integrative and overarching studies are lacking, despite their
importance for gaining a holistic, deep understanding of the mechanisms in
HRTs. Here, we note that HRTs are complex systems that require intensive
research, and using insights from all-human team research could help clarify
them, especially in real-world settings. For example, You and Robert (2018b)
discuss a loop in the IPO model that conceptually may be plausible for HRTs.
Author/Disciplinea/ Moderator
Main Category Independent Variable(s)b Dependent Variable(s)b Variable(s) Moderating Effectb
Claure et al. (2020)/V/Cat. 1 • Robot fairness • User trust; Perceived robot fairness • Human capabilities • (+ for weak performers; n.s. otherwise)
Correia, Petisca, et al. • Robot goal orientation • Competitiveness Index • Session • Mixed results, see study
(2019)/IV/Cat. 1 (performance-driven vs. learning- • McGill Friendship number for details
driven) Questionnaire
• Relationship
Assessment Scale
• Godspeed
Questionnaire
Jung et al. (2013)/VII/ • Back-channeling • Team functioning • Task • (+)
Cat. 3 • Perceived robot complexity • (+)
engagement • (n.s.)
• Perceived robot
competence
You and Robert (2016)/ • Training • Individual performance • Curiosity • (+ /n.s.)
IV/Cat. 3 • Team performance • Control • (n.s./-)
Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b
Empirical findings: (−) = negative effect, (+) = positive effect, (n.s.) = not significant.
Marble et al. (2004)/VII/Cat. 2 • Dynamic robot • Target detection • Session number • (+)
autonomy • Situation awareness • (+)
Richert et al. (2016)/II/Cat. 1 • Personal characteristics • Task performance • Subjective behavior • Not reported
• Robot characteristics o Stress
o Cooperation
Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b
Empirical findings: (−) = negative effect, (+) = positive effect, (n.s.) = not significant.
members. We also recognize the common risk of a publication bias for our
study (Jager et al., 2020). A publication bias largely occurs before and during
scientific review processes, leaving us with limited possibilities to overcome it
completely (information on our efforts to address potential biases can be found
in Supplementary Appendix A).
How can robots be team members? It would greatly advance the field if re-
search were to explain the mechanisms that underlie interaction in HRTs, based
on behavioral theories. Currently, no overall theory exists for HRTs, which leads
to unsteady theoretical foundations. Moreover, most studies do not offer solid
theoretical justification for their predictions (see, e.g., summary of limitations).
An approach already being used by some researchers relies on investigations of
all-human teams as bases for HRT research, which ensures a more theory-driven
effort (Krämer et al., 2012). In addition to social identity theory (Tajfel, 1974),
shared mental models, and gender studies, leader–member exchange theory as
applied to all-human teams (van Breukelen et al., 2006) might be a suitable
theoretical basis for research on HRTs with social robots in particular. Another
valuable effort might seek insights into how individuals, companies, and society
can prepare for HRTs. Since all studies we found during our review focus
on existing HRTs (see Tables 2–17), many open questions remain regarding how
to prepare for HRTs. Researchers have a broader responsibility than core HRT
topics; in particular, they should address the transition toward HRTs and how
individuals, companies, and society can engage beneficially in it.
Arnold and Scheutz (2017)/ n.i. n.i. • There are many ethical questions currently unsolved in HRI
ethics/VIII/T+S • "Robots do not have to be teammates to work with a team, especially given the ethical and empirical
question of how the whole range of physical presence with a robot can affect others.” (p. 449)
Ma et al. (2018)/HRT design/I/ n.i. n.i. • Overview of important considerations for the design of HRTs, including team and teamwork
T+S components
Oleson et al. (2011)/ n.i. • Inappropriate levels of trust can lead to disuse and/or misuse of robots
integrative study/IV/T+S • Proposal of a framework for human–robot trust
Robert (2018)/integrative Motivational theories of n.i. • Proposal of “Motivational Theory of Human–Robot Teamwork” based on: emotional stability,
study/IV/T+S individual and team extraversion, openness to experience, agreeableness, conscientiousness of a robot
motivation
Tamburrini (2009)/ethics/VIII/ n.i. n.i. • Robot ethics is a growing field that gains importance with the developments of new robots and
T+S technology
You and Robert (2018b)/ IPO model, trust theories n.i. • Proposal of working framework for HRTs based on IMOI (inputs-mediators-outputs-inputs)
integrative study/I/T+S framework
Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies
are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b
Team interaction: T = task interaction, T+S = task & social interaction.
c
Effect: (+) = positive effect, (−) = negative effect, (n.s.) = not significant, (n.r.) = effect not reported.
d
None of the studies provide information on robot morphology, robot level, or type of embodiment. Only two studies provide information on team setup,
focusing on autonomous mixed teams (Ma et al., 2018) and human-directed robot teams (You & Robert, 2018b), respectively.
Table 16. Empirical Integrative and Overarching Studies on Multiple-Member HRTs.
Robot
Author/ Morphologyb/
Subcategory/ Robot Levelc/Type Team Interactiond/Team Data Basis/Participantsf/
Disciplineb of Embodiment Setupe Time Frameg Underlying Theories Research Frameworkh Key Findings
Burke et al. Functional T n = 31/participants from n.i. n.i. • Proposal and validation of measurement
(2008)/ (Telemax UGV, FEMA USAR teams in instruments for assessment of usability (team
Metrics/I Matilda UGV, the US/C (in 2 phases) member), incidents (observer) and team
Dragonrunner processes (observer) in HRTs
UV, AirRobot
UAV)/↓/Physical
robot
Giachetti et al. n.i./n.i./Simulation/ T/Combinations of n.i./n.i./n.i. Shared mental models • Number of robots (2,4) • Performance
(2013)/ virtual robot n_robot={2,4} & • Team size (6,12) • Effectiveness (see key findings and study for detailed • Proposal and validation of agent-based
integrative n_team={6,12} • Team centralization (low, high) results and interaction effects) simulation model for the examination of team
study/III • Danger level (30%, 70%) designs
• Robot reliability (6, 10 hours) • "there are limits to the number of robots that
a team can effectively manage” (p. 25)
• "larger teams have more robust performance
over the noise [i.e., not controllable] factors”
(p. 15)
• "robot reliability is critical to the formation of
human-robot teams” (p. 15)
• "high centralization of decision-making
authority created communication
bottlenecks at the commander in large
teams” (p. 15)
Pina et al. n.i./↓/n.i. T+S n = 16/age: range 19–49 n.i. n.i. • Proposal of generalizable metric classes for the
(2008)/ years/C (four 8-minute evaluation of HRTs and illustration of need
metrics/I sessions with different for these with case study
robotic team sizes)
Robert and You (2015)/integrative study/IV Functional/humanoid (adapted from the LEGO®
Mindstorms® EV3 sets)/↓/Physical robot T+S n = 30 (15 teams)/14 f; age: M = 24.7
(SD = 7.48); from large university in US/C (laboratory) n.i. n.i. • "subgroups formed
between humans and their robots were negatively correlated with various team
outcomes” (p. 1)
(continued)
Table 16. (continued)
Robot
Author/ Morphologyb/
Subcategory/ Robot Levelc/Type Team Interactiond/Team Data Basis/Participantsf/
Disciplineb of Embodiment Setupe Time Frameg Underlying Theories Research Frameworkh Key Findings
You and Robert (2017)/integrative study/IV Functional/humanoid (adapted from the LEGO®
Mindstorms® EV3 sets)/↓/Physical robot T+S n = 114 (in 57 teams)/51 m; age: M = 23 years
(SD = 5.3); from online subject pool at a Midwestern university in US/C (duration with
robots approx. 25–30 minutes) (between-subjects) Media richness (channel expansion
theory, cognitive model of media choice, media synchronicity), technology acceptance
model, unified model of technology acceptance and use of technology, social identity
theory • Emotional attachment of teams to robots leads to better performance • "Both
robot and team identification increased a team’s emotional attachment to its robots”
(p. 377)
You and Robert (2019b)/integrative study/IV Functional/humanoid (adapted from the LEGO®
Mindstorms® EV3 sets)/↓/Physical robot T+S n = 108 (54 teams)/54 men; age: M = 24 years;
from subject pool at a Midwestern university in US/C (duration approx. 25–30 minutes)
Social categorization and attraction theories, trust theories • "robot identification
increased trust in robots and team identification increases trust in one’s teammates”
(p. 244) • "Trust in robots increases team performance while trust in teammates
increases satisfaction” (p. 244)
You and Robert (2019a)/integrative study/IV Functional/humanoid (adapted from the LEGO®
Mindstorms® EV3 sets)/↓/Physical robot T+S n = 88 (44 teams)/42 f; age: M = 23.6
(SD = 4.1); from large university in US/C (duration approx. 25–30 minutes) Social identity
theory, trust theories • Subgroups can form in HRTs (when humans identify with their
robots) • "Robot identification and team identification moderate . . . negative effects of
subgroup formation on teamwork quality and subsequent team performance” (p. 1)
Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics; studies are
categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b
n.i. = no information provided by author(s).
c
Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d
Team interaction: T = task interaction, T+S = task & social interaction.
e
Team setup: s = human, □ = robot.
f
Participants: f = female, m = male.
g
Time frame: C = cross-sectional, L = longitudinal.
h
Effect: (+) = positive effect, (−) = negative effect, (n.s.) = not significant, (n.r.) = effect not reported.
Table 17. Empirical Integrative and Overarching Studies on Dyadic HRTs.
Robot
Morphologyb/
Author/ Robot Levelc/ Team
Subcategory/ Type of Interactiond/ Data Basis/Participantsf/
Disciplinea Embodiment Team Setupe Time Frameg Research Frameworkh Key Findingsi
Visser et al. n.i./↓/n.i. T+S n = 12/4 f, age: range 18– n.i. • Proposal and validation of
(2006)/ 25 years/C (3x5x6 measurement
metrics/I/III mixed factorial design (2 methodology for team
within, 1 between)) performance of HRTs
You and Functional T+S n = 200/77 m, age: range • Human–robot (work-
Robert (PR2)/ 18–68 years (M= 36.5, style) similarity helps to
Note: aDisciplines: I = HRI, II = management, III = military, IV = cognitive science, V = robotics, VI = space, VII = (urban) search and rescue, VIII = ethics;
studies are categorized based on a “best fit”-approach and might comprise aspects of more than one considered research discipline.
b
n.i. = no information provided by author(s).
c
Robot level: ↓ = robot on lower level, ↔ = robot on same level, ↑ = robot on higher level.
d
Team interaction: T = task interaction, T+S = task & social interaction.
e
Team setup: s = human, □ = robot.
f
Participants: f = female, m = male.
g
Time frame: C = cross-sectional, L = longitudinal.
h
Effect: (+) = positive effect, (−) = negative effect, (n.s.) = not significant, (n.r.) = effect not reported.
i
None of the studies indicated underlying theories.
and differences. Insights along these lines could improve the management of
HRTs in organizations and support human teammates. Second, research
comparisons might address different application scenarios of HRTs in or-
ganizations. Most HRT research addresses specific application scenarios, such
as rescue robots in USAR (Kruijff-Korbayová et al., 2015) or robots working
on the International Space Station (Fong et al., 2005). Insights on HRTs in
organizations in an office environment are still scarce (see disciplines of
studies in different categories). With an online survey, we learned that ac-
ceptance of robots in work-related HRTs has increased, especially during the
COVID-19 pandemic. Results suggest four potential roles for robots in HRTs
(Figure 6): (1) robotic team assistant supporting administrative and
coordination work, (2) robotic knowledge expert providing expertise in a specific
field, (3) robotic scrum master (Scrum Alliance, 2021) working with the team
and ensuring that the team lives up to agile values and principles, such as
through coaching, and (4) robotic team leader with institutionalized authority
over other team members. Third, researchers should examine HRTs in real-life
settings. The studies we reviewed are overwhelmingly conceptual or cross-
sectional laboratory studies (see study characteristics in different categories),
with limited capacity to transfer the findings to real-life settings (Levitt & List,
2005). Especially noting current developments in the world economy and the
increasing relevance of robots in everyday contexts, continued research
Figure 6. Potential roles of robots in HRTs. Note: Sources for icons: top left icon:
Scrum by Sharon Showalker; top right icon: leader by Oksana Latysheva; bottom right
icon: to do list by ArtWorkLeaf; bottom left icon: Brain by Alla Zeluska, all from
thenounproject.com
Conclusion
Human–robot teams are an emerging phenomenon and part of the future of
work and society. Yet extant research lacks some important insights. With this
review, we establish some unexplored research areas, many of which pertain
to real-life, long-term HRT deployment considerations. We offer six prop-
ositions for continued research, reflecting the strong relevance of the topic and
considering current developments in the world economy. We hope this review
provides inspiration for ongoing HRT studies.
Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research project is funded by the
German Federal Ministry of Education and Research (BMBF) within the KompAKI
project. The authors are responsible for the content of this publication.
Supplemental Material
Supplemental material for this article is available online.
ORCID iD
Franziska Doris Wolf https://orcid.org/0000-0002-7125-7597
Notes
1. The survey participants were recruited via Amazon Mechanical Turk. We sought
business leaders; they had an average of 6.99 (SD = 6.431) years of leadership ex-
perience in various industries, including IT (23.2%), banking/insurance (13.8%), and
health care/social sectors (9.9%). These leaders were responsible for teams (45.3%),
departments (29.9%), business areas (11.1%), or the whole company (13.8%). The
survey introduced social robots and their potential roles in organizations and issued the
prompt “I can imagine having a robot as an assistant/colleague/supervisor,” which
participants answered on a 5-point Likert scale (1 = “not at all,” 5 = “absolutely”).
3. In line with Breazeal (2003) and Fong et al. (2003), the low level of social interaction
(see Figure 2) includes so-called “socially evocative” (Breazeal 2003, p. 169) robots
that elicit social responses from humans without responding socially to them.
4. Using our proposed definition, we can distinguish HRTs from related concepts, such
as human–robot interaction (HRI) or human–robot collaboration (HRC). In particular,
HRI is “the study of the humans, robots, and the ways they influence each other” (Fong,
Thorpe, & Baur, 2001, p. 257), and HRC implies humans and robots “working jointly
with others or together especially in an intellectual endeavor” (Green, Billinghurst,
Chen, & Chase, 2008, p. 1). Similar to HRTs, the involved parties (robots and
humans) interact, such as by expressing or responding to emotions (Kreijns et al.,
2003). Yet HRC and HRTs are narrower than HRI, in that they pursue the achievement
of joint goals (Bradshaw et al., 2009; Marge et al., 2009; You & Robert, 2018b).
Uniquely in HRTs, team members work both interdependently and together (Bradshaw
et al., 2009; Ma et al., 2018).
References
Abrams, A. M. H., & Rosenthal-von der Pütten, A. M. (2020). I–C–E framework: Concepts for
group dynamics research in human-robot interaction. International Journal of
Social Robotics, 12(6), 1213-1229. https://doi.org/10.1007/s12369-020-00642-z.
ACM (2007). Proceedings of the 2007 ACM/IEEE Conference on Human-Robot
Interaction: Robot as Team Member. Association for Computing Machinery,
New York, NY, USA.
Adams, J. S. (1963). Toward an understanding of inequity. The Journal of Abnormal
and Social Psychology, 67, 422-436. https://doi.org/10.1037/h0040968.
Adams, J. S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in
experimental social psychology (Vol. 2, pp. 267-299). Elsevier, Burlington.
https://doi.org/10.1016/s0065-2601(08)60108-2.
Alboul, L., Saez-Pons, J., & Penders, J. (2008). Mixed human-robot team navigation in
the GUARDIANS project. In SSRR 2008: IEEE International Workshop on
Safety, Security and Rescue Robotics (pp. 95-101). IEEE Xplore. https://doi.org/
10.1109/SSRR.2008.4745884.
Ambrose, R. O., Aldridge, H., Askew, R. S., Burridge, R. R., Bluethmann, W., Diftler,
M., Lovchik, C., Magruder, D., & Rehnmark, F. (2000). Robonaut: NASA’s space
humanoid. IEEE Intelligent Systems and Their Applications, 15(4), 57-63. https://
doi.org/10.1109/5254.867913.
Arnold, T., & Scheutz, M. (2017). Beyond moral dilemmas: Exploring the ethical
landscape in HRI. In 12th ACM/IEEE International Conference on Human-Robot
Interaction (HRI) (pp. 445-452). IEEE. https://doi.org/10.1145/2909824.3020255
Arnold, T., & Scheutz, M. (2018). Observing robot touch in context: How does touch
and attitude affect perceptions of a robot’s social qualities? In HRI ’18, Pro-
ceedings of the 2018 ACM/IEEE International Conference on Human-Robot
Interaction. ACM. https://doi.org/10.1145/3171221.3171263.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in
social psychological research: Conceptual, strategic, and statistical considerations.
Journal of Personality and Social Psychology, 51(6), 1173-1182. https://doi.org/
10.1037//0022-3514.51.6.1173.
Barrick, M. R., Stewart, G. L., Neubert, M. J., & Mount, M. K. (1998). Relating
member ability and personality to work-team processes and team effectiveness.
Journal of Applied Psychology, 83(3), 377-391. https://doi.org/10.1037/0021-
9010.83.3.377.
Bartneck, C., Reichenbach, J., & Carpenter, J. (2006). Use of Praise and Punishment in
Human-Robot Collaborative Teams. In ROMAN 2006 - The 15th IEEE In-
ternational Symposium on Robot and Human Interactive Communication (pp.
177–182). IEEE. https://doi.org/10.1109/ROMAN.2006.314414
Bell, S. T., & Marentette, B. J. (2011). Team viability for long-term and ongoing
organizational teams. Organizational Psychology Review, 1(4), 275-292. https://
doi.org/10.1177/2041386611405876.
Bluethmann, W., Ambrose, R., Diftler, M., Askew, S., Huber, E., Goza, M., Rehnmark,
F., Lovchik, C., & Magruder, D. (2003). Robonaut: a robot designed to work with
humans in space. Autonomous Robots, 14(2-3), 179-197. https://doi.org/10.1023/
a:1022231703061.
Bozcuoglu, A. K., Yazdani, F., Beßler, D., Togorean, & Beetz, M. (2015). Reasoning
on communication between agents in a human-robot rescue team. In A. Aly,
S. Griffiths, & F. Stramandinoli (Eds), Towards Intelligent Social
Robots: Current Advances in Cognitive Robotics: Workshop in Conjunction with
Humanoids 2015. Seoul, South Korea.
Bradshaw, J. M., Dignum, V., Jonker, C., & Sierhuis, M. (2012). Human-agent-robot
teamwork. IEEE Intelligent Systems, 27(2), 8-13. https://doi.org/10.1109/mis.2012.37.
Bradshaw, J. M., Feltovich, P., Johnson, M., Breedy, M., Bunch, L., Eskridge, T., Jung,
H., Lott, J., Uszok, A., & van Diggelen, J. (2009). From tools to teammates: Joint
activity in human-agent-robot teams. In M. Kurosu (Ed), Human Centered Design
(pp. 935-944). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-
02806-9_107.
Braga, J. L. (1972). Role theory, cognitive dissonance theory, and the interdisciplinary
team. Interchange, 3(4), 69-78. https://doi.org/10.1007/BF02145409.
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3-
4), 167-175. https://doi.org/10.1016/s0921-8890(02)00373-1.
1730 Group & Organization Management 48(6)
Breazeal, C., Brooks, A., Chilongo, D., Gray, J., Hoffman, G., Kidd, C. D., Lee, H.,
Lieberman, J., & Lockerd, A. (2004). Working collaboratively with humanoid
robots. In 4th IEEE/RAS International Conference on Humanoid Robots, 2004.
IEEE. Santa Monica, CA, USA. https://doi.org/10.1109/ICHR.2004.1442126
Breazeal, C., Hoffman, G., & Lockerd, A. (2004). Teaching and working with robots as
a collaboration. In N. Jennings (Ed), Proceedings of the Third International Joint
Conference on Autonomous Agents & Multiagent Systems (Vol. 3, pp. 1030-1037).
Association for Computing Machinery, USA. https://doi.org/10.1109/AAMAS.2004.
242646.
Breazeal, C., Kidd, C. D., Lockerd Thomaz, A., Hoffman, G., & Berlin, M. (2005).
Effects of nonverbal communication on efficiency and robustness in human-robot
teamwork. IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS). IEEE. Edmonton, Canada. https://doi.org/10.1109/iros.2005.1545011.
Brown, S., Toh, J., & Sukkarieh, S. (2005). A flexible human-robot team framework
for information gathering missions. In C. Sammut (Ed), Proceedings of the 2005
Australasian Conference on Robotics & Automation. Australian Robotics and
Automation Association Inc. Sydney.
Bruemmer, D. J., Marble, J. L., & Dudenhoeffer, D. D. (2002). Mutual initiative in
human-machine teams. In J. J. Persensky, B. Hallbert, & H. Blackman (Eds),
New century, new trends: Proceedings of the IEEE 7th Conference on
Human Factors and Power Plants. IEEE. https://doi.org/10.1109/HFPP.2002.
1042863.
Bruemmer, D. J., & Walton, M. C. (2003). Collaborative tools for mixed teams of
humans and robots [Conference presentation]. International Workshop on Multi-
Robot Systems. Washington, DC, USA.
Burke, J. L., & Murphy, R. R. (2004). Human-robot interaction in USAR technical search:
two heads are better than one. In RO-MAN 2004. 13th IEEE International Workshop
on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759) (pp.
307–312). IEEE. https://doi.org/10.1109/ROMAN.2004.1374778.
Burke, J. L., & Murphy, R. R. (2007). RSVP: An investigation of remote shared visual
presence as common ground for human-robot teams. In Proceedings of the 2007
ACM/IEEE Conference on Human-Robot Interaction: Robot as team member
(pp. 161-168). ACM. https://doi.org/10.1145/1228716.1228738.
Burke, J. L., Pratt, K. S., Murphy, R. R., Lineberry, M., Taing, M., & Day, B. (2008).
Toward developing HRI metrics for teams: Pilot testing in the field. In C. R.
Burghart, & A. Steinfeld (Eds), Proceedings of metrics for human-robot interaction
workshop in affiliation with the 3rd ACM/IEEE international conference on
human-robot interaction (HRI 2008). Amsterdam, The Netherlands.
Canning, C., Donahue, T. J., & Scheutz, M. (2014). Investigating human perceptions of
robot capabilities in remote human-robot team tasks based on first-person robot
video feeds. IEEE/RSJ International Conference on Intelligent Robots and
Systems. IEEE, Chicago, IL, USA. https://doi.org/10.1109/iros.2014.6943178.
Chen, M., Nikolaidis, S., Soh, H., Hsu, D., & Srinivasa, S. (2020). Trust-aware de-
cision making for human-robot collaboration: Model learning and planning. ACM
Demir, M., McNeese, N. J., & Cooke, N. J. (2020). Understanding human-robot teams
in light of all-human teams: Aspects of team interaction and shared cognition.
International Journal of Human-Computer Studies, 140, 102436. https://doi.org/
10.1016/j.ijhcs.2020.102436.
Deng, E., Mutlu, B., & Mataric, M. J. (2019). Embodiment in socially interactive robots.
Foundations and Trends® in Robotics, 7(4), 251-356. https://doi.org/10.1561/2300000056.
Dennis, A. R., Fuller, R. M., & Valacich, J. S. (2008). Media, tasks, and commu-
nication processes: A theory of media synchronicity. MIS Quarterly, 32(3), 575.
https://doi.org/10.2307/25148857.
Dennis, A. R., & Kinney, S. T. (1998). Testing media richness theory in the new media:
The effects of cues, feedback, and task equivocality. Information Systems Re-
search, 9(3), 256-274. https://doi.org/10.1287/isre.9.3.256.
Dias, M. B., Kannan, B., Browning, B., Jones, G., Argall, B., Dias, M. F., Zinck, M.,
Veloso, M. M., Stentz, A., & April, A. (2008). Sliding autonomy for peer-to-peer
human-robot teams (CMU-RI-TR-08-16). Pittsburgh, PA: Carnegie Mellon
University.
Dudenhoeffer, D. D., Bruemmer, D. J., & Davis, M. L. (2001). Modeling and
simulation for exploring human-robot team interaction requirements. Proceedings of
the 2001 Winter Simulation Conference (Cat. No.01CH37304) (pp. 730-739).
IEEE. https://doi.org/10.1109/WSC.2001.977361.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems.
Human Factors, 37(1), 32-64. https://doi.org/10.1518/001872095779049543.
Eyssel, F., & Kuchenbrandt, D. (2012). Social categorization of social robots: An-
thropomorphism as a function of robot group membership. The British Journal of
Social Psychology, 51(4), 724-731. https://doi.org/10.1111/j.2044-8309.2011.
02082.x.
Fiore, S. M., Badler, N. L., Boloni, L., Goodrich, M. A., Wu, A. S., & Chen, J. (2011).
Human-robot teams collaborating socially, organizationally, and culturally. Pro-
ceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1),
465-469. https://doi.org/10.1177/1071181311551096.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive
robots. Robotics and Autonomous Systems, 42(3), 143-166. https://doi.org/10.
1016/S0921-8890(02)00372-X.
Fong, T., Nourbakhsh, I., Kunz, C., Fluckiger, L., Schreiner, J., Ambrose, R., Burridge, R.,
Simmons, R., Hiatt, L. M., Schultz, A., Trafton, J. G., Bugajska, M., & Scholtz, J.
(2005). The peer-to-peer human-robot interaction project. SPACE Conferences and
Exposition: Space 2005. American Institute of Aeronautics and Astronautics. Long
Beach, California. https://doi.org/10.2514/6.2005-6750.
Fong, T., Scholtz, J., Shah, J. A., Fluckiger, L., Kunz, C., Lees, D., Schreiner, J., Siegel,
M., Hiatt, L. M., Nourbakhsh, I., Simmons, R., Ambrose, R., Burridge, R.,
Antonishek, B., Bugajska, M., Schultz, A., & Trafton, J. G. (2006). A preliminary
study of peer-to-peer human-robot interaction. 2006 IEEE International Conference
on Systems, Man and Cybernetics (Vol. 4, pp. 3198-3203). IEEE. https://doi.
org/10.1109/ICSMC.2006.384609.
Fong, T., Thorpe, C., & Baur, C. (2001). Collaboration, dialogue, and human-robot
interaction. In R. A. Jarvis, & Z. Alexander (Eds), Springer Tracts in Advanced
Robotics: Vol. 6, Robotics Research, The Tenth International Symposium, ISRR
2001. Springer.
Forlizzi, J., & DiSalvo, C. F. (2006). Service robots in the domestic environment. HRI ’06,
Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot In-
teraction (p. 258). ACM. https://doi.org/10.1145/1121241.1121286.
Fraune, M. R., Oisted, B. C., Sembrowski, C. E., Gates, K. A., Krupp, M. M., &
Šabanović, S. (2020). Effects of robot-human versus robot-robot behavior and
entitativity on anthropomorphism and willingness to interact. Computers in
Human Behavior, 105, 106220. https://doi.org/10.1016/j.chb.2019.106220.
Fraune, M. R., Sabanovic, S., & Smith, E. R. (2017). Teammates first: Favoring
ingroup robots over outgroup humans. 26th IEEE International Symposium on
Robot and Human Interactive Communication (RO-MAN) (pp. 1432-1437).
IEEE. https://doi.org/10.1109/ROMAN.2017.8172492.
Freedy, A., De Visser, E. J., Gershon, W., & Coeyman, N. C. (2007). Measurement of
trust in human-robot collaboration. International Symposium on Collaborative
Technologies and Systems. IEEE. Orlando, FL, USA. https://doi.org/10.1109/cts.
2007.4621745.
Fuse, Y., & Tokumaru, M. (2020). Social influence of group norms developed by
human-robot groups. IEEE Access, 8, 56081-56091. https://doi.org/10.1109/
ACCESS.2020.2982181.
Gao, F., Cummings, M. L., & Bertuccelli, L. F. (2012). Teamwork in controlling
multiple robots. In HRI ’12. Proceedings of the 7th ACM/IEEE international
conference on Human-robot interaction (pp. 81-88). ACM, New York, NY, USA.
https://doi.org/10.1145/2157689.2157703.
Gervits, F., Thurston, D., Thielstrom, R., Fong, T., Pham, Q., & Scheutz, M. (2020).
Toward genuine robot teammates: Improving human-robot team performance
using robot shared mental models. AAMAS ’20, Proceedings of the 19th In-
ternational Conference on Autonomous Agents and Multi-Agent Systems
(pp. 429-437). International Foundation for Autonomous Agents and Multiagent
Systems.
Giachetti, R. E., Marcelli, V., Cifuentes, J., & Rojas, J. A. (2013). An agent-based
simulation model of human-robot team performance in military environments.
Systems Engineering, 16(1), 15-28. https://doi.org/10.1002/sys.21216.
Gladden, M. E. (2014). The social robot as ’charismatic leader’: A phenomenology of
human submission to nonhuman power. In J. Seibt, R. Hakli, & M. Nørskov (Eds),
Sociable Robots and The Future of Social Relations: Proceedings of Robo-
Philosophy 2014. (Vol. 273). IOS Press, Amsterdam, the Netherlands. https://
doi.org/10.3233/978-1-61499-480-0-329.
Gladstein, D. L. (1984). Groups in context: A model of task group effectiveness.
Administrative Science Quarterly, 29(4), 499-517. https://doi.org/10.2307/
2392936.
Gombolay, M. C., Bair, A., Huang, C., & Shah, J. (2017). Computational design of
mixed-initiative human–robot teaming that considers human factors: Situational
awareness, workload, and workflow preferences. The International Journal of
Robotics Research, 36(5-7), 597-617. https://doi.org/10.1177/0278364916688255.
Gombolay, M. C., Gutierrez, R. A., Clarke, S. G., Sturla, G. F., & Shah, J. A. (2015).
Decision-making authority, team efficiency and human worker satisfaction in
mixed human–robot teams. Autonomous Robots, 39(3), 293-312. https://doi.org/
10.1007/s10514-015-9457-9.
Gombolay, M. C., Huang, C., & Shah, J. A. (2015). Coordination of human-robot
teaming with human task preferences. AAAI Fall Symposium: Technical Report
FW-15-01. AAAI. Arlington, VA, USA.
Goodrich, M. A., McLain, T. W., Anderson, J. D., Sun, J., & Crandall, J. W. (2007).
Managing autonomy in robot teams: Observations from four experiments. Pro-
ceedings of the 2007 ACM/IEEE Conference on Human-Robot Interaction: Robot
as team member (pp. 25–32). ACM. https://doi.org/10.1145/1228716.1228721.
Green, S. A., Billinghurst, M., Chen, X., & Chase, J. G. (2008). Human-robot collab-
oration: A literature review and augmented reality approach in design. International
Journal of Advanced Robotic Systems, 5(1), 1. https://doi.org/10.5772/5664.
Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human–robot
teams. Interaction Studies, 8(3), 483-500. https://doi.org/10.1075/is.8.3.10gro.
Harriott, C. E., Zhang, T., & Adams, J. A. (2011). Evaluating the applicability of
current models of workload to peer-based human-robot teams. HRI ’11, Proceedings
of the 6th ACM/IEEE International Conference on Human-Robot Interaction. ACM.
Hayes, B., & Scassellati, B. (2014). Challenges in shared-environment human-robot
collaboration. AI Matters, 1(2), 22-23. https://doi.org/10.1145/2685328.2685335.
Hentschel, T., Heilman, M. E., & Peus, C. V. (2019). The multiple dimensions of gender
stereotypes: A current look at men’s and women’s characterizations of others and
themselves. Frontiers in Psychology, 10, 11. https://doi.org/10.3389/fpsyg.2019.
00011.
Hiatt, L. M., Harrison, A. M., & Trafton, J. G. (2011). Accommodating human
variability in human-robot teams through theory of mind. In: IJCAI’11 Proceedings
of the Twenty-Second International Joint Conference on Artificial Intelligence -
(Vol. 3, pp. 2066-2071). AAAI Press. https://doi.org/10.5591/978-1-57735-516-8/
IJCAI11-345.
Hiatt, L. M., & Trafton, J. G. (2010). A cognitive model of theory of mind. Proceedings
of the 10th International Conference on Cognitive Modeling, ICCM, pp. 91-96.
High-Level Expert Group on Artificial Intelligence (2019). A definition of AI:
Main capabilities and scientific disciplines. Brussels, Belgium. https://digital-
strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-
scientific-disciplines.
Hoffman, G., & Breazeal, C. (2004). Collaboration in human-robot teams. AIAA 1st
Intelligent Systems Technical Conference. AIAA. Chicago, IL, USA. https://doi.
org/10.2514/6.2004-6434.
Kantor, G., Singh, S., Peterson, R., Rus, D., Das, A., Kumar, V., Pereira, G., & Spletzer,
J. (2006). Distributed search and rescue with robot and sensor teams. In S. Yuta, H.
Asama, E. Prassler, T. Tsubouchi, & S. Thrun (Eds), Field and service robotics:
Recent advances in research and applications (pp. 529-538). Springer Berlin
Heidelberg. https://doi.org/10.1007/10991459_51.
Kelley, T. L. (1927). Interpretation of educational measurements. Measurement and
Adjustment Series. World Book Co.
Kelly, R., & Watts, L. (2017). Slow but likeable? Inefficient robots as caring team
members. In M. F. Jung, S. Sabanovic, F. Eyssel, & M. R. Fraune (Eds),
Robots in groups and teams: A CSCW 2017 Workshop. https://hri.cornell.edu/
robots-in-groups/.
Kirby, R., Forlizzi, J., & Simmons, R. (2010). Affective social robots. Robotics and
Autonomous Systems, 58(3), 322-332. https://doi.org/10.1016/j.robot.2009.09.015.
Kittmann, R., Fröhlich, T., Schäfer, J., Reiser, U., Weißhardt, F., & Haug, A. (2015).
Let me introduce myself: I am Care-O-bot 4, a gentleman robot. In S. Diefenbach,
N. Henze, & M. Pielot (Eds), Mensch und Computer 2015 – Proceedings
(pp. 223-232). De Gruyter Oldenbourg.
Koppula, H. S., Jain, A., & Saxena, A. (2016). Anticipatory planning for human-robot
teams. In M. A. Hsieh, O. Khatib, & V. Kumar (Eds), Experimental Robotics: The
14th International Symposium on Experimental Robotics (Vol. 109, pp. 453-470).
Springer International Publishing. https://doi.org/10.1007/978-3-319-23778-7_30.
Kozlowski, S. W. J., & Bell, B. S. (2003). Work groups and teams in organizations. In
W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds), Handbook of Psychology:
Industrial and Organizational Psychology/Industrial and organizational psy-
chology (Vol. 12, pp. 333-375). John Wiley & Sons. https://doi.org/10.1002/
0471264385.wei1214.
Krämer, N. C., Pütten, A., & Eimler, S. (2012). Human-agent and human-robot in-
teraction theory: Similarities to and differences from human-human interaction. In
M. Zacarias, & J. V. de Oliveira (Eds), Studies in Computational Intelligence.
Human-Computer Interaction: The Agency Perspective (Vol. 396, pp. 215-240).
Springer. https://doi.org/10.1007/978-3-642-25691-2_9.
Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social
interaction in computer-supported collaborative learning environments: a review
of the research. Computers in Human Behavior, 19(3), 335-353. https://doi.org/10.
1016/s0747-5632(02)00057-2.
Kruijff-Korbayová, I., Colas, F., Gianni, M., Pirri, F., Greeff, J., Hindriks, K., Neerincx,
M. A., Ögren, P., Svoboda, T., & Worst, R. (2015). TRADR project: Long-term
human-robot teaming for robot assisted disaster response. Künstliche Intelligenz,
29(2), 193-201. https://doi.org/10.1007/s13218-015-0352-5.
Kruijff, G. J. M., Janı́ček, M., Keshavdas, S., Larochelle, B., Zender, H., Smets,
N. J. J. M., Mioch, T., Neerincx, M. A., Diggelen, J. V., Colas, F., Liu, M.,
Pomerleau, F., Siegwart, R., Hlaváč, V., Svoboda, T., Petřı́ček, T., Reinstein,
M., Zimmermann, K., Pirri, F., ... Gianni, M., Papadakis, P., Sinha, A.,
Balmer, P., Tomatis, N., Worst, R., Linder, T., Surmann, H., Tretyakov, V.,
Musić, S., & Hirche, S. (2018). Passive noninteracting control for human-robot team
interaction. IEEE Conference on Decision and Control (CDC). Miami Beach, FL,
USA. https://doi.org/10.1109/cdc.2018.8619289.
Musić, S., Salvietti, G., Dohmann, P. B. G., Chinello, F., Prattichizzo, D., & Hirche, S.
(2019). Human–robot team interaction through wearable haptics for cooperative
manipulation. IEEE Transactions on Haptics, 12(3), 350-362. https://doi.org/10.
1109/TOH.2019.2921565.
Nakano, H., & Goodrich, M. A. (2015). Graphical narrative interfaces: Representing
spatiotemporal information for a highly autonomous human-robot team. 24th
IEEE International Symposium on Robot and Human Interactive Communication.
Kobe, Japan. https://doi.org/10.1109/roman.2015.7333684.
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In B.
Adelson, S. Dumais, & J. Olson (Eds), CHI ’94: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems (pp. 72-78). ACM.
https://doi.org/10.1145/191666.191703.
Natarajan, M., & Gombolay, M. C. (2020). Effects of anthropomorphism and ac-
countability on trust in human robot interaction. HRI’20, Proceedings of the 2020
ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-42).
ACM. https://doi.org/10.1145/3319502.3374839.
Nevatia, Y., Stoyanov, T., Rathnam, R., Pfingsthorn, M., Markov, S., Ambrus, R., &
Birk, A. (2008). Augmented autonomy: Improving human-robot team performance
in urban search and rescue. IEEE/RSJ International Conference on Intelligent
Robots and Systems. Nice, France. https://doi.org/10.1109/iros.2008.4651034.
Nikolaidis, S., Kwon, M., Forlizzi, J., & Srinivasa, S. (2018). Planning with verbal
communication for human-robot collaboration. Journal of Human-Robot
Interaction, 7(3), Article 22. https://doi.org/10.1145/3203305.
Nikolaidis, S., Lasota, P., Ramakrishnan, R., & Shah, J. (2015). Improved human–
robot team performance through cross-training, an approach inspired by human
team training practices. The International Journal of Robotics Research, 34(14),
1711-1730. https://doi.org/10.1177/0278364915609673.
Nikolaidis, S., & Shah, J. (2012). Human-robot teaming using shared mental models.
HRI’12. Proceedings of the 7th ACM/IEEE international conference on Human-
Robot Interaction. ACM.
Nikolaidis, S., & Shah, J. (2013). Human-robot cross-training: Computational for-
mulation, modeling and evaluation of a human team training strategy. Proceedings
of the 8th ACM/IEEE International Conference on Human-Robot Interaction.
https://doi.org/10.1109/hri.2013.6483499.
Nourbakhsh, I. R., Sycara, K., Koes, M., Yong, M., Lewis, M., & Burion, S. (2005).
Human-robot teaming for search and rescue. IEEE Pervasive Computing, 4(1),
72-78. https://doi.org/10.1109/MPRV.2005.13.
Oh, J., Navarro-Serment, L., Suppe, A., Stentz, A., & Hebert, M. (2015). Inferring door
locations from a teammate’s trajectory in stealth human-robot team operations.
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS) (pp. 5315-5320). IEEE. https://doi.org/10.1109/IROS.2015.7354127.
Oleson, K. E., Billings, D. R., Kocsis, V., Chen, J. Y. C., & Hancock, P. A. (2011).
Antecedents of trust in human-robot collaborations. 2011 IEEE International
Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and
Decision Support (CogSIMA). Miami Beach, FL, USA. https://doi.org/10.1109/
cogsima.2011.5753439.
Onnasch, L., & Roesler, E. (2020). A taxonomy to structure and analyze human–robot
interaction. International Journal of Social Robotics. https://doi.org/10.1007/
s12369-020-00666-5.
Ososky, S., Schuster, D., Phillips, E., & Jentsch, F. (2013). Building appropriate trust in
human-robot teams. AAAI Spring Symposium: Trust and Autonomous Systems.
Pages, J., Marchionni, L., & Ferro, F. (2016). TIAGo: The modular robot that adapts to
different research needs. International Workshop on Robot Modularity. Daejeon,
Korea.
Pandey, A. K., & Gelin, R. (2018). A mass-produced sociable humanoid robot:
Pepper: The first machine of its kind. IEEE Robotics & Automation Magazine,
25(1), 40-48. https://doi.org/10.1109/MRA.2018.2833157.
Phillips, E., Ososky, S., Swigert, B., & Jentsch, F. (2012). Human-animal teams as an
analog for future human-robot teams. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 56(1), 1553-1557. https://doi.org/10.1177/
1071181312561309.
Phillips, E., Schaefer, K. E., Billings, D. R., Jentsch, F., & Hancock, P. A. (2016).
Human-animal teams as an analog for future human-robot teams: Influencing
design and fostering trust. Journal of Human-Robot Interaction, 5(1), 100-125.
https://doi.org/10.5898/JHRI.5.1.
Pina, P., Cummings, M. L., Crandall, J. W., & Della Penna, M. (2008). Identifying
generalizable metric classes to evaluate human-robot teams. In C. R. Burghart, &
A. Steinfeld (Eds), Proceedings of metrics for human-robot interaction
workshop in affiliation with the 3rd ACM/IEEE international conference of
human-robot interaction (HRI 2008). Amsterdam, The Netherlands.
Ranzato, A. J., & Vertesi, J. (2017). From I-robot to we-robot: Effects of team structure
on robotic tasks. In M. F. Jung, S. Sabanovic, F. Eyssel, & M. R. Fraune
(Eds), Robots in groups and teams: A CSCW 2017 workshop. https://hri.cornell.
edu/robots-in-groups/.
Richert, A., Shehadeh, M., Müller, S., Schröder, S., & Jeschke, S. (2016). Robotic
workmates: Hybrid human-robot-teams in the Industry 4.0. In R. M. Idrus, & N.
Zainuddin (Eds), ICEL2016-Proceedings of the 11th International Conference on
e-Learning: ICEL2016. Academic Conferences and Publishing International
Limited.
Robert, L. P. Jr. (2018). Motivational theory of human robot teamwork. International
Robotics & Automation Journal, 4(4), 248-251. https://doi.org/10.15406/iratj.
2018.04.00131.
Robert, L. P. Jr., & You, S. (2015). Subgroup formation in teams working with robots. In B.
Begole, J. Kim, K. Inkpen, & W. Woo (Eds), Proceedings of the 33rd Annual ACM
Performance Metrics for Intelligent Systems Workshop (pp. 251-257). ACM, New
York, NY, USA. https://doi.org/10.1145/2377576.2377622.
Wang, N., Pynadath, D. V., & Hill, S. G. (2016a). The impact of POMDP-generated
explanations on trust and performance in human-robot teams. Proceedings of the
2016 International Conference on Autonomous Agents & Multiagent Systems
(pp. 997-1005). International Foundation for Autonomous Agents and Multiagent
Systems, Richland, SC.
Wang, N., Pynadath, D. V., & Hill, S. G. (2016b). Trust calibration within a human-
robot team: Comparing automatically generated explanations. HRI’16: The
Eleventh ACM/IEEE International Conference on Human Robot Interaction.
IEEE. https://doi.org/10.1109/hri.2016.7451741.
Wang, N., Pynadath, D. V., Rovira, E., Barnes, M. J., & Hill, S. G. (2018). Is it my
looks? Or something I said? The impact of explanations, embodiment, and ex-
pectations on trust and performance in human-robot teams. In J. R. C. Ham, E.
Karapanos, P. P. Morita, & C. M. Burns (Eds), Persuasive Technology (pp. 56-69).
Springer International Publishing. https://doi.org/10.1007/978-3-319-78978-1_5.
Wang, J., Wang, H., & Lewis, M. (2008). Assessing cooperation in human control of
heterogeneous robots. HRI ’08, Proceedings of the 3rd ACM/IEEE International
Conference on Human Robot Interaction (pp. 9-16). ACM. https://doi.org/10.
1145/1349822.1349825.
Williams, T., Briggs, P., & Scheutz, M. (2015). Covert robot-robot communication:
Human perceptions and implications for HRI. Journal of Human-Robot
Interaction, 4(2), 23. https://doi.org/10.5898/JHRI.4.2.Williams.
Wolf, F. D., & Stock-Homburg, R. M. (2021). Making the first step towards robotic
leadership – hiring decisions for robotic team leader candidates. ICIS 2021
Proceedings. https://aisel.aisnet.org/icis2021/hci_robot/hci_robot/2.
Woods, D. D., Tittle, J., Feil, M., & Roesler, A. (2004). Envisioning human-robot
coordination in future operations. IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews), 34(2), 210-218. https://doi.org/
10.1109/tsmcc.2004.826272.
Yagoda, R. E., & Gillan, D. J. (2012). You want me to trust a ROBOT? The de-
velopment of a human–robot interaction trust scale. International Journal of
Social Robotics, 4(3), 235-248. https://doi.org/10.1007/s12369-012-0144-0.
Yamaji, Y., Miyake, T., Yoshiike, Y., De Silva, P. R. S., & Okada, M. (2011). STB:
Child-dependent sociable trash box. International Journal of Social Robotics,
3(4), 359-370. https://doi.org/10.1007/s12369-011-0114-y.
Yazdani, F., Brieber, B., & Beetz, M. (2016). Cognition-enabled robot control for
mixed human-robot rescue teams. In E. Menegatti, N. Michael, K. Berns, & H.
Yamaguchi (Eds), Intelligent Autonomous Systems 13. Springer International
Publishing. https://doi.org/10.1007/978-3-319-08338-4_98.
Yi, D., & Goodrich, M. A. (2014). Supporting task-oriented collaboration in human-
robot teams using semantic-based path planning. In R. E. Karlsen, D. W. Gage,
C. M. Shoemaker, & G. R. Gerhart (Eds), SPIE Defense + Security. Baltimore,
MD, USA.
You, S., & Robert, L. P. Jr. (2016). Curiosity vs. control: Impacts of training on
performance of teams working with robots. CSCW ’16 Companion: Proceedings
of the 19th ACM Conference on Computer Supported Cooperative Work and
Social Computing Companion (pp. 449-452). Association for Computing Ma-
chinery. https://doi.org/10.1145/2818052.2869121.
You, S., & Robert, L. P. Jr. (2017). Emotional attachment, performance, and viability in
teams collaborating with embodied physical action (EPA) robots. Journal of the
Association for Information Systems, 19(5), 377-407.
You, S., & Robert, L. P. Jr. (2018a). Human-robot similarity and willingness to work
with a robotic co-worker. HRI’18, Proceedings of the 2018 ACM/IEEE In-
ternational Conference on Human-Robot Interaction (pp. 251-260). ACM. https://
doi.org/10.1145/3171221.3171281.
You, S., & Robert, L. P. Jr. (2018b). Teaming up with robots: An IMOI (inputs-
mediators-outputs-inputs) framework of human-robot teamwork. International
Journal of Robotic Engineering, 2(3).
You, S., & Robert, L. P. Jr. (2019a). Subgroup formation in human-robot teams.
Fortieth International Conference on Information Systems. Munich, Germany.
You, S., & Robert, L. P. Jr. (2019b). Trusting robots in teams: Examining the impacts of
trusting robots on team performance and satisfaction. Proceedings of the 52nd
Hawaii International Conference on System Sciences. Maui, HI, USA.
Zaheer, A., McEvily, B., & Perrone, V. (1998). Does trust matter? Exploring the effects
of interorganizational and interpersonal trust on performance. Organization
Science, 9(2), 141-159. https://doi.org/10.1287/orsc.9.2.141.
Zheng, K., Glas, D. F., Kanda, T., Ishiguro, H., & Hagita, N. (2013). Designing and
implementing a human–robot team for social interactions. IEEE Transactions on
Systems, Man, and Cybernetics: Systems, 43(4), 843-859. https://doi.org/10.1109/
TSMCA.2012.2216870.
Author Biographies
Franziska Doris Wolf is a Ph.D. student at the chair for Marketing and Human
Resource Management at the Technical University of Darmstadt, Germany. Her re-
search interests include the introduction and establishment of human-robot teams in an
organizational context. franziska.wolf@bwl.tu-darmstadt.de
Ruth Maria Stock-Homburg is Professor of Marketing and Human Resource
Management at the Technical University of Darmstadt, Germany and founder of the
leap in time Research Institute. She holds a Ph.D. in Economics and a Ph.D. in
Psychology. Her research interests include Human-Robot Interaction, Future of Work,
Leadership and User Innovation. rsh@bwl.tu-darmstadt.de