
Bonding with a Couchsurfing Robot: The Impact of Common Locus on Human-Robot Bonding In-the-Wild

Published: 15 February 2023

Abstract

Due to the increased presence of robots in human-inhabited environments, we observe a growing body of examples in which humans show behavior indicative of strong social engagement towards robots that possess no life-like realism in appearance or behavior. In response, we focus on the under-explored concept of a common locus as a relevant driver for a robot passing a social threshold. The key principle of common locus is that sharing place and time with a robotic artifact functions as an important catalyst for a perception of shared experiences, which in turn leads to bonding. We present BlockBots, minimal cube-shaped robotic artifacts deployed in an unsupervised, open-ended, and in-the-field experimental setting aimed at exploring the relevance of this concept. Participants host the BlockBot in their domestic environment before passing it on, without necessarily knowing they are taking part in an experiment. Qualitative data suggest that participants make identity and mind attributions to the BlockBot. People who actively maintain a common locus with the BlockBot, by taking it with them when changing location, on trips, and during outdoor activities, project more of these attributes than others.

1 Introduction

Robots are entering human-inhabited environments in growing numbers. However, we lack robust theories on how people make sense of these new robotic co-inhabitants and collaborators [84, 89]. Jung and Hinds note that research on human-robot interaction has been ‘dominated by laboratory studies, largely examining a single human interacting with a single robot’, causing a disconnect between these studies and the socially complex environments robots are meant to be - and increasingly are - placed in [47]. Several authors have therefore argued for supplementing controlled laboratory research with field studies on human-robot interaction in the real world [7, 84, 91].
Recently, several such studies have been conducted to increase our understanding of human-robot interactions ‘in the wild’, for example in care homes [32], school classrooms [2], and in homes with children [69] and the elderly [41]. The humanoid robot Kaspar spent over a year in a special nursery for autistic children [91]. Furthermore, field studies have examined how humans in public space react to robots that approach them [107], listen to them [74], greet them in hotel lobbies [73], or give tours in museums [46].
Levillain and Zibetti claim that strong realism in either human- or animal-like appearance, or in autonomous movement and interactive behavior, allows a robot to reach a ‘social threshold’, where humans experience its presence as that of another social agent and are disposed to socially interact with the machine [58]. While there is ample laboratory-based evidence that appearance and behavior are important factors in establishing a robot as a social agent, there is a growing body of cases ‘in the wild’ in which humans show behavior indicating strong social engagement towards robots that neither possess a life-like appearance nor show life-like movement or behavior [1, 10, 11, 31, 40, 61, 102]. How can we explain these cases?
One common element uniting these cases is that such robots typically reside in close proximity to humans, in a shared space, often for prolonged periods of time. We hypothesize that this act of living or working with a robot for an extended period of time can facilitate the experience of a robot as a social agent. Sharing space and time leads to shared experiences, which are further reinforced by increased attribution and projection of lifelike qualities such as identity, emotions, desires, and intentions. This ultimately leads to increased human-robot bonding, even if the robot is not lifelike in appearance or behavior. We call this principle the common locus: the sharing of time, space, and experiences with a robot, which causes the human to bond with the robot.
As an initial playful, qualitative exploration of this under-explored principle, we designed and deployed a series of original robotic artifacts called BlockBots. BlockBots are small, abstract, cube-shaped robots with minimal human-like appearance and behavior, designed to maximize a common locus with humans (Figure 1). Our methodology for generating interaction with the BlockBot is inspired by hitchBOT: a robot that hitchhiked across Canada, the U.S., and several European countries [87]. However, we have taken an even more open-ended approach to study how humans interact and bond with these robots in the wild, by presenting BlockBots as autonomous creatures that want to meet people and travel the world, and letting them ‘couchsurf’ from owner to owner. The goal of this study is not to present generalizable knowledge on human-robot interaction or substantive proof for the common locus, but to identify a potentially relevant variable through a novel design methodology, which can be further explored and scrutinized in future research.
Fig. 1. Three BlockBots.
Since the concept of common locus emphasizes the experience of a shared life within a human-inhabited environment, we consider it appropriate to use this open-ended, ‘in the wild’, field-study-based approach. Participants can keep the BlockBot as long as they wish, and then pass it on to whomever they want for the next step on its journey. The BlockBots are thus hosted by an autonomously growing chain of participants, in a natural setting, without set goals or supervision by the experimenters. Participants can send messages to a phone number attached to the back of the box, to let the so-called parent of the bot know how it is doing. These qualitative data have been analyzed to study how participants engage with the robotic artifact and whether the robotic artifact is perceived as a social agent. Despite its minimal design and lack of behavior, we observe that participants are disposed to socially interact with the BlockBot.
Note that this approach differs from hitchBOT’s experimental setup, which was intentionally less open-ended: hitchBOT was given a target destination and an activity bucket list, and information on its travels was shared on Instagram and Facebook. BlockBot hosts were free to do what they wanted with it, including ignoring it, and in principle did not know they were taking part in an experiment. Also, apart from a smiley face, BlockBots are abstract in appearance and very minimal in their behavior: they are only ‘awake’ or ‘asleep’ while charging.
The remainder of this paper is structured as follows. Section 2 reviews related work on human-robot bonding. In Section 3 we introduce the concept of common locus. Section 4 discusses our methodology and the design and deployment of the BlockBot. We then present the results of our field experiment (Section 5), followed by a discussion of the results, known limitations of our study, alternative explanations for the results, and suggestions for potential future research directions (Section 6). Section 7 concludes the paper.

2 Background

In this section, we first reflect on the definition of a robot, and then discuss anthropomorphism, human-robot bonding and the mechanisms of appearance and behavior that allow a robot to pass the social threshold.

2.1 What is a Robot?

There is no universally agreed-upon definition of a robot. Most often, a robot is defined by its technical capabilities: its capacity to autonomously sense and act in the physical world and achieve goals [64]. Most such definitions can be argued to face demarcation problems. For example, the definition above includes machines that are rarely referred to as robots, such as vending machines, and excludes remote-controlled vehicles such as drones or bomb disposal robots, since their tele-operated nature makes them not ‘autonomous’.
While the definition of what makes a robot is not the main topic of this paper, we would like to argue here that it is beneficial in the context of human-robot interaction to think about robots not along mechanical and ontological lines, but along the lines of how they appear (in the broadest sense of the word) to humans, as Coeckelbergh proposes [16], or whether and what we project on them, in the spirit of Dennett’s intentional stance [23]. Don Ihde has also argued that our relation to robots can be understood as an alterity relation: relating to technology as a (quasi-)other [44]. With respect to media in general, Nass and Reeves state that people tend to interact with media as if they were interacting with other persons [79].
Therefore, when considering a definition of robots, it is perhaps more useful not to draw a technical distinction between objects, machines, and robots based on the physical properties of the object or the mechanical nature of the interaction, but rather to distinguish based on the relation humans have with the robotic technology in question. One could distinguish between robots, machines, or artifacts that are ‘subjectified’ - objects onto which humans project features such as identity, state of mind, emotions, and motivations - and objects that are not: machines that appear to people as ‘subjects’ versus machines that appear to people as ‘objects’.
This distinction is not unwarranted. Nass and colleagues already showed in 1994 that humans treat computers as social actors [71]. It seems that robots inhabit a new ambiguous ontological category, separate from machines in general. Damiano and Dumouchel note that people across age groups perceive robots as ‘ambiguous objects’ in relation to traditional ontological categories [19]. Robots seem to inhabit a place on a spectrum between alive and not alive, sentient and not sentient, intelligent and not intelligent [48]. Defining some robots as ‘subjectified’ artifacts could, therefore, be argued to be more in line with their perceived ambiguous character.
This means that there could be many subjectified objects; let us call them artificial creatures. Most robots, specifically the ones that we interact with or would want to develop a relationship with, such as social robots, could be seen as artificial creatures. This also means that research on artificial creatures is relevant for human-robot interaction research, as many robots are examples of artificial creatures, at least to some degree. There are also artificial creatures, say a teddy bear, that do not fit the classical definition of a robot. Likewise, there could be robots by the classical definition that are not subjectified objects. For example, a milking robot or surgical robot could be a mere tool and object to one person, while it may be something different to the farmer or surgeon who works with it. This is not problematic because, as discussed, we do not define the artificial creature in isolation, but rather in a relationship to a human. Note that there could also be objects that we develop an attachment to, but that we do not subjectify. An example might be a watch that I inherited from my grandfather. These are not the types of objects that fall within the scope of our study.
For the purposes of this paper, in order to sidestep the debate over defining what a robot is, we understand the common definition of a robot as a mechanical artifact or system in the physical world that possesses a certain level of autonomy, agency, and sensory capacity. Hence, we have refrained from calling BlockBot a robot. Instead, we have called it a robotic artifact. This term is meant to convey that the object has certain qualities that could evoke the connotation of a robot but might not qualify as a robot according to classical definitions [64]. Given our focus on artificial creatures, one can imagine it will be interesting to research robots, robotic artifacts, and creatures that display only minimal human appearance and behavior, to get a better grasp of what else could induce this kind of projection in humans. From that perspective, it is less important to us that the artifact looks like a human, behaves like a human, or, for that matter, looks or behaves like a classical robot - we are rather looking for the opposite that would still pass the social threshold.

2.2 Anthropomorphism

People tend to project human-like qualities or traits such as emotion, agency, or intention on artifacts. This tendency is known as anthropomorphism [58, 106]. Anthropomorphism has largely been viewed as a cognitive bias in other fields of study [26, 58]. However, the concept generally fulfills a positive and central role in the fields of human-robot interaction and social robotics and is theorized to lead to a natural and intuitive interaction between humans and robots [19, 27, 35].
In cognitive science and philosophy of mind, thinkers such as Dennett do not see anthropomorphism as a human bug, but as a feature. He argues that taking an intentional stance, i.e., projecting beliefs, desires, and intentions on the other, is an ecologically useful and computationally economical way to understand and predict the behavior of ‘the other’ [23]. This does not mean that anthropomorphism is necessarily desirable. For example, Coeckelbergh has outlined several normative responses to anthropomorphism [17]. For instance, it matters whether the ‘other’, say a robot, was designed to benefit the humans it is interacting with in the first place, or rather its designers.
In line with the tendency to project human-like qualities, people have been observed to show rich emotional and social behavior when interacting with robots, for example empathy. A study using electroencephalography (EEG) found that humans are able to empathize with robot pain [90]. Another study, using fMRI, showed that violent interactions towards robots elicited neural activity similar to that elicited by violent interactions towards humans, and distinct from interactions with an inanimate object, indicating that humans and robots evoke similar emotional reactions [81]. Similarly, other studies have shown that people were less likely to turn off an agreeable and intelligent machine that begged for its life than one that was neither agreeable nor intelligent [4].
There is also ample anecdotal evidence from non-scientific sources of humans showing empathy for mechanical artifacts. In an experiment covered in an episode of the radio show Radiolab, participants were more uncomfortable holding a Furby upside down, which started to cry, than a Barbie doll [62]. In a workshop, participants showed strong reservations about hitting a dinosaur-like robotic toy called Pleo that they had just spent an hour playing with [28]. When the social domestic robot Jibo was discontinued, media outlets reported on people mourning its ‘passing’, with some parents having to explain to their children that ‘Jibo was not going to be around anymore’ [9]. After Boston Dynamics showcased the balancing capabilities of their quadruped robot Spot - which showed life-like movement - by kicking it in an online video, there was online outcry over the supposed cruelty displayed [75].
It is commonly claimed that strong realism in either human-like appearance or autonomous movement or behavior “allows a robot to reach the ‘social threshold’, where humans experience its presence as that of another social agent and are disposed to socially interact with the machine.” [58]. The balance between these factors is hypothesized to be asymmetrical: behavior seems to be a stronger factor for a robot to pass the social threshold than its appearance [58]. We discuss these categories briefly in the next part of this section.

2.3 Appearance

Appearance-based strategies can be clustered into three categories: abstract, animal-like, and human-like robots. A robot’s abstract appearance does not in itself function as a social cue; instead, such robots often use a behavioral approach to solicit social engagement (for further reading see [22]).
There are conflicting findings about the extent to which the human form functions as a positive social cue for human-robot interaction. One study showed that humans react more empathically towards robots with more human-like appearances than towards those with non-human-like appearances [96]. On the other hand, the human form can raise expectations about intelligence, intent, agency, and physical capabilities that contemporary robot technology might not be able to deliver on, which in turn might generate negative feelings [70]. Regardless, there have been several projects developing robots with a highly realistic human appearance, such as the android clone of Hiroshi Ishiguro [42] or Bina48 developed by Hanson Robotics [39]. Unsurprisingly, the appeal of sex robots is also largely tied to their human-like appearance [92]. To map out the concept of human-like appearance, the ABOT project constructed three distinct dimensions of human-like appearance - body-manipulators, face, and surface - based on a collection of 251 real-life robots [78]. One can also generalize from visual appearance: for example, Schreuter et al. demonstrated that people showed higher levels of conformance to a conversational assistant with a human-like voice than to a text-based assistant [85].
Another appearance strategy explores animal-like forms. Research suggests that humans have a more positive attitude towards animal-like or toy-like robots than human-like robots [24]. For example, the robot seal PARO stimulates feelings of attachment and engagement [63] and children formed a very quick attachment to the robot dog AIBO [99].

2.4 Behavior

Realistic behavior or movement is believed to be a stronger cue for social interaction than appearance [58]. Even the simplest movement can give the impression that it is carried out with intent, in pursuit of goals, and as the result of some sort of intelligence, and can influence the level of empathy humans feel for a robot [21]. One study showed that humans project intentions on simple geometric shapes that move around a screen [43]. Braitenberg argued that even the simplest movement of small robotic creatures in reaction to their environment can lead to the attribution of complex behavior: movement towards a location seems indicative of interest and therefore curiosity, while movement away from a source seems to indicate fear or disgust [8]. A good example of this is the iRobot Roomba™, a nondescript disc-shaped vacuum robot that moves around to clean floors while avoiding obstacles. Studies in long-term human-robot interaction showed that people are eager to personify their vacuum robot with names and ascribe personal traits to the robot [29, 30, 89].
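To make Braitenberg’s point concrete, the sketch below shows how little machinery such a creature needs. This is a minimal, illustrative Arduino-style program, not any specific published design: the pin assignments and the CROSSED flag are our own assumptions. With same-side wiring, the vehicle accelerates away from a light source, which observers tend to read as fear; with crossed wiring, it steers towards the source, which reads as aggression or curiosity.

```cpp
// Minimal sketch of a Braitenberg-style vehicle (illustrative assumptions:
// two photoresistors on A0/A1, two PWM-driven wheel motors on pins 5/6).

const int SENSOR_LEFT  = A0;  // light sensor, left side
const int SENSOR_RIGHT = A1;  // light sensor, right side
const int MOTOR_LEFT   = 5;   // left wheel motor (PWM)
const int MOTOR_RIGHT  = 6;   // right wheel motor (PWM)

// false: same-side wiring -> turns away from light ('fear')
// true:  crossed wiring   -> turns towards light ('aggression'/'curiosity')
const bool CROSSED = false;

void setup() {
  pinMode(MOTOR_LEFT, OUTPUT);
  pinMode(MOTOR_RIGHT, OUTPUT);
}

void loop() {
  // Map 10-bit sensor readings (0..1023) onto 8-bit motor speeds (0..255):
  // more light on one side means more drive on the connected wheel.
  int left  = analogRead(SENSOR_LEFT)  / 4;
  int right = analogRead(SENSOR_RIGHT) / 4;

  if (CROSSED) {
    analogWrite(MOTOR_LEFT,  right);  // left wheel driven by right sensor
    analogWrite(MOTOR_RIGHT, left);
  } else {
    analogWrite(MOTOR_LEFT,  left);   // each wheel driven by its own side
    analogWrite(MOTOR_RIGHT, right);
  }
}
```

The point is that the entire ‘mind’ of such a creature is two wires’ worth of logic; the curiosity, fear, or intent is supplied by the observer.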
Movement has also been a central strategy for imbuing artifacts with robotic life. Adding a reactive potential to non-anthropomorphic robotic artifacts reinforces the possibilities of their apparent behavior [58]. These artifacts are known under several monikers: Objects with Intent [82], familiar domestic object robots [12], robjects [80], the object-based robot design approach [53], or abstract robotics [22].
Interestingly, fallibility or helplessness in robots can also facilitate certain attributions. For example, when an AIBO robot dog trips while walking, it is not perceived as malfunctioning but as endearing [51]. The Tweenbot, a cardboard robot on wheels that could only drive forward, managed to cross Central Park in New York; its destination was written on a flag it carried [52]. The robot would continuously get stuck, but passers-by would pick it up and point it in the right direction.
Aside from motion, the ability to communicate emotion is also a strong catalyst for social engagement. A robot mimicking facial expressions strongly influences the level of human empathy [38]. A robot that adapts its mood to the mood of a human, through facial and verbal expressions of a robot head, also increases feelings of helpfulness towards that robot [37]. Robots that behave empathically themselves are perceived as friendlier [55, 77].
Lastly, the use of spoken or written language has also been a mechanism to establish a social interaction. In a domestic setting, disembodied chat-bots or monolithic home assistants such as Amazon Alexa rely on spoken language as a social cue. Robots that need to communicate frequently and clearly often rely on natural language to establish a social interaction [88]. In one study, participants were hesitant to turn off a robot when it begged for its life [4].
Artistic projects have also deployed behavior as a social cue to solicit attributions of curiosity or feelings of empathy (for an extensive collection of these kinds of robots, see [97]). The interactive robotic sculpture Senster, created by artist Edward Ihnatowicz, moved its ‘head’ towards sources of relatively loud sound and low levels of movement, which caused exhibition guests to attribute curiosity and a more complex intelligence to Senster than was actually programmed [105]. The Beggar Bot used human language to successfully entice people to give money to it - even in the presence of actual human beggars [93].
Strategies for designing robot sociability can thus be characterized as a spectrum between high or low appearance cues and/or high and low behavioral cues. In the next section, we will introduce a novel, additional variable: the common locus.

3 Common Locus

There is a growing number of examples in which humans form a strong bond with - or at least engage socially with - robots that are explicitly machine-like in their appearance, express limited to no behavior, and have a primarily non-social function. However, these robots have emerged as social agents nonetheless. For example, a study from the University of Washington on the interaction between US military Explosive Ordnance Disposal personnel and their bomb disposal robots found that soldiers make rich psychological attributions to these robots and form strong bonds with them [10]. Interestingly, these robots are not capable of autonomous movement and look distinctly machine-like: often not more than four wheels with a robotic arm and a camera. Nonetheless, soldiers attribute a gender to and nickname these robots, and show behavior indicative of grief and sadness when one is lost in action [10]. Similarly, a study of the language used on social media about the discontinuation of the NASA Mars rover Opportunity found that people “verbally mourn robots similar to living things” [11]. This is surprising since Opportunity is a remote-controlled vehicle that was not designed to evoke social behavior.
We also find several examples of humans bonding with robots that were not designed to solicit social engagement in non-scientific sources such as media articles. A MARCBot, a type of bomb disposal robot nicknamed Boomer, was reportedly given a burial and gun salute after exploding on duty [31]. When the NASA Mars rover Opportunity was discontinued, the crew sent it a farewell song and the press statement reportedly ‘amounted to a funeral’ [40]. The Canadian Broadcasting Corporation (CBC) reportedly threw a retirement party for its five bulky mail delivery robot colleagues, which employees had named and attributed personalities to. During the retirement party, employees discussed shared experiences with and memories of the robots, such as one robot blocking the door while a presenter was late for a live broadcast [102].
The aforementioned robots are explicitly machine-like in their appearance and express limited to no behavior. However, we observe that such robots are situated in highly social environments, such as places of work, the home, and war zones, which allows for frequent human-robot interaction. This indicates that a social setting can be an important factor for non-social robots to emerge as social agents, which is in line with research showing that people working closely with robots ‘frequently use anthropomorphic language’ about those robots [13].
This paper theorizes that the act of living or working in close proximity or cooperation with a robot for an extended period of time can facilitate the experience of a robot as a social agent, even if this robot possesses only weak appearance-based and behavioral social cues. We propose an additional variable for a robot to pass the social threshold and be perceived as a social agent: the concept of a common locus.
The common locus is a term introduced by Nikolaos Mavridis in an interview with Wired journalist Emmet Cole in the context of long-term human-robot interaction [65]. In this interview, Mavridis hypothesizes that the “concept of ‘Sharing’, and more specifically building and maintaining a metaphorical ‘Common Locus’ ... forms the backbone of a meaningful and sustainable Human-Robot relation” [65]. Mavridis argues that what unites two friends is all the shared elements between them - a Common Locus - which grows over time. According to Mavridis, this Common Locus is made up of: “shared memories (what they have lived together, and what they have experienced in common), their shared acquaintances and friends (given that we don’t live in isolation; but are deeply embedded within our social network), their shared interests and tastes; including also more fundamental shared elements, such as a shared language of communication” [65]. This idea formed the basis for Mavridis and colleagues to develop a robot called Sarah the FaceBot, which would exploit information published online, such as on Facebook, to create a pool of shared memories and shared friends [66, 67]. During interactions, Sarah the FaceBot would, for example, refer to past events that the human and the robot were both present at, or mention that it had seen a common friend the other day.
While the term ‘common locus’ appears to be limited to this interview, we believe that the concept aptly describes a crucial factor for human-robot bonding. This study provides a more elaborate definition of the common locus. Specifically, we stress the importance of spatial proximity and shared time, both of which allow the frequent interaction needed to maintain and strengthen the common locus. We hypothesize four potential sub-components that facilitate a common locus:
(1) An explicit or implicit decision to treat a certain entity as a subject, at least to some degree (really believing this, suspending disbelief, or playing along);
(2) A shared space or close proximity for the human-robot interaction to happen;
(3) Shared time, or a repeating opportunity for engagement with the robot through time; and
(4) The perception of shared life experiences, which includes, for example, specific encounters, events, activities, and conversations that a robot and a human are both present for, goals that a human and a robot both work towards, and interaction with similar social contacts.
We hypothesize that these components facilitate a common locus between a human and a robot and allow people to have the impression that they have ‘shared’ memories, social contacts, and experiences with the robot, which could in turn lead to the projection of life-like attributes.
The concept of common locus could provide a novel, additional dimension in establishing, understanding, and describing human-robot relations, which can be especially relevant in environments that depend on close cooperation with robots [18, 103]. If social interaction with or bonding to a robot is desirable by design, awareness that the (prolonged) proximity of a robot in the social life of a human can facilitate social engagement might in turn inform the design and usage of such a robot. Of course, strong human-robot engagement could also be exploited for intentionally harmful goals or cause unintended consequences, such as a scenario in which a human puts him- or herself in harm’s way to ‘save’ a robot. In such situations, the common locus implies that the presence of robots in the proximity of people should be curbed, since merely removing anthropomorphic qualities could be insufficient to prevent human-robot engagement or bonding.
For completeness, we do not want to imply that only appearance, behavior, and common locus lead to subjectification; that would be an overly simplified view of the complexity of relationships between humans and the outside world, and of how these can develop. Nor do we want to imply that common locus is a necessary condition for subjectification to arise. For example, subjectification can also happen instantly, as is the case with many of the best cybernetic works of art. In such a context the experience had better be instant, as the long-term conditions for common locus are infeasible. If anything, we want to inspire researchers to look further than appearance and behavior, which are already abundantly studied, and to go beyond mirroring humans in general. There will be more factors at play, including characteristics of the human rather than the robot, and these factors will also interact.

4 Methodology

In this section, we discuss our research methodology and questions, and the design and deployment of the BlockBot. As an initial exploration of the concept of common locus, we aim to design a robotic artifact that facilitates a shared space, time, and perceived experience between itself and a person, but displays low life-like realism in appearance and behavior (see Figure 2), and that can be deployed independently outside the laboratory. The result of this process is the robotic artifact called BlockBot.
Fig. 2. The design aim for BlockBot.

4.1 Research Method: In-the-Wild Field Study

Since the concept of the common locus puts emphasis on the experience of a shared life within a human-inhabited environment, we consider it appropriate to use an ‘in-the-wild’ field study approach [7, 84, 91]. Jung and Hinds note that research on human-robot interaction has been ‘dominated by laboratory studies, largely examining a single human interacting with a single robot’ [47]. This has led to a large body of scientific research on which technical mechanisms affect human-robot interactions. However, there is a disconnect between these studies and the socially complex environments robots are meant to be - and increasingly are - placed in [47]. Laboratory studies do not ‘provide insights into the aspects of human-robot interaction that emerge in the less structured real-world social settings in which they are meant to function’ [84], and hence may lack ecological validity. Controlled laboratory studies offer helpful insights, but would benefit from being supplemented with studies on human-robot interaction outside the laboratory, in the proverbial wild [7, 84, 91].
Robot field studies have, for example, focused on how humans react to robots in public spaces that approach them [107], listen to them [74], greet them in hotel lobbies [73], or give tours in museums [46]. The humanoid robot Kaspar spent over a year in a special nursery for children with autism [91]. Creative robotics projects such as hitchBOT [87] or the BeggarBot [93] have also employed an in-the-field approach to let people engage with relevant human-robot interaction topics - such as trust between humans and robots - in a natural setting. Giusti and Marti note that social robotics is ‘an extraordinary opportunity to design technologies with open-ended possibilities for interaction and engagement with humans’ [34]. Citing William Gaver, they argue that systems designed to be open-ended can lead to ‘an intrinsically motivated and personally defined form of engagement’, instead of an ‘experience to be passively consumed’ [34]. In an interview, the creators of hitchBOT mention that the overall result of their experiment led them to theorize “that robotic technologies that afford creative shaping by their users are more likely to become socially integrated” [104].
Obviously, there are also inherent limitations to such an open, in-the-wild setup. It is harder to carefully control conditions and gather experimental data. We do not position our study as a quantitative empirical study proving necessary conditions. It is a qualitative empirical study, aimed less at providing answers and more at providing proof of concept and possibility, and at stimulating questions and directions for theory formation and future research, both in terms of the research question and topic and in terms of the in-the-wild method used. Also, as described below, we developed multiple iterations of both artifact and method, so design research can be seen as a secondary method. In the future, studies like these could be extended with ethnographic, ethological, or ecological approaches, but this was outside the scope of this paper.
Following these proposals, we designed BlockBot to be open-ended and ambiguous in terms of functionality, which allows participants to actively and creatively shape its social role. Placing the BlockBot in domestic settings outside of our supervision or control aims to ensure a naturalistic engagement that is as close as possible to how people would interact with other robotic technologies in a domestic setting. We deliberately chose not to give BlockBot a social media account or similar insight into its social history - where it had been, with whom, and what it had done - to ensure that each interaction with the BlockBot was as natural as possible and not informed by previous encounters. Hence, we had a more controlled environment than hitchBOT, for example [87]. Obviously, there are trade-offs that come with this approach, and we expand on our methodological shortcomings in the discussion section (Section 6).
BlockBots are presented to participants as autonomous creatures that want to meet people and travel the world, inspired by projects such as hitchBOT [87]. Participants can keep the BlockBot as long as they wish, and then pass it on to whomever they want for the next step on its journey. BlockBots are thus hosted by an autonomously growing chain of participants, in a natural setting, without set goals or supervision.
Qualitative data are gathered through WhatsApp communication. Participants can send messages about their thoughts, feelings, and activities with the BlockBot, as well as photos, to a phone number posted on the back of the box. These qualitative data are analyzed for how participants engage with the robotic artifact and whether they make identity and mind attributions to the BlockBot. We feel that WhatsApp is an ideal medium for this purpose. The association with ‘what’s up’ is not accidental: people generally use it to let others know how they are doing, and there is a very low threshold to using it, which is key. Given its informal nature, we aim to get responses that are as close as possible to the experience itself. It is also a medium in which people can express themselves visually through photos, not just text, making it a very useful channel for ‘ecological’ self-observations.

4.2 Research Questions

Our high-level research questions are the following: Is it possible for people to develop a relationship with a robotic artifact, in the sense that they subjectify the artifact, even if it has low levels of human-like appearance and behavior? And could common locus, i.e., the sharing of space and time, leading to perceived joint experiences and activities, be a factor in developing this bond?
Our theoretical foundations have been illustrated in the previous sections, but to single out one of the most salient ones let us return to Dennett’s intentional stance. In his book ‘The Intentional Stance’ he states: ‘Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.’ [23]
This quote helps to organize the more detailed questions we would like answered. At the level of research method feasibility, we want to know whether people are willing to host the BlockBots, engage with them, provide sufficiently rich and frequent feedback, and pass them on. To get an idea of whether people would develop a relationship, we want to know what kind of shared activities and experiences they engage in, and with what frequency. To get a sense of whether the relationship involves a level of subjectification of the bot, we can follow Dennett’s quote. First, do these bots pass the initial threshold in terms of the humans deciding explicitly to treat the bot as a subject? As a proxy for this, we look for indications that some form of identity is attributed to the bot. Then, to understand the relationship better, we want to understand the kinds of attributions people make that could indicate goals, beliefs, desires, emotions, intentions, social needs, and so on. Finally, to understand common locus better, we want to zoom in on these shared experiences and perhaps differentiate between participants with high and low degrees of common locus.

4.3 Preliminary Study

Leading up to and during the design process of the BlockBot, we conducted a preliminary, open-ended tinkering experiment [54], called TouristBox, to gather initial insights for BlockBot’s design. The TouristBoxes were low-cost impromptu cardboard creatures that were left out in public on the Dutch island of Schiermonnikoog. The TouristBox was made of a cardboard box, had cardboard strips as limbs, and a smiley drawn with a marker as a face. Most importantly, the TouristBox held a sign that said (originally in Dutch): ‘I want to see the island. Will you take me along? Let my parents know how I am doing at: [phone number]’.
The TouristBox solicited some very interesting engagement and thereby gave a valuable initial indication of what actions and attributions we could expect from participants. People showed motivation to save it from the rain, ‘feed’ it, touch it, and patch it up. They took it along on their holiday, took photos of it at several locations, engaged in several activities with it, and wrote about its well-being, state of mind, and location. The inanimate, anthropomorphic cardboard box managed to ‘see’ the island, meet people, and survive out in the wild for at least two weeks. Participants seemed to shape its social role mainly as that of a fellow traveler who joined their holiday for a while before letting the box go its own way again.
In contrast to the TouristBox, we decided not to leave the BlockBot in a public space, but rather to have it passed on between participants. The reason for this was economic. Since the BlockBot was considerably more costly to produce, including in time, leaving the robotic artifact in public space could result in it being destroyed by people or the elements before it could generate any insights. Consequently, participants needed to adopt an active role in finding the next participant, which potentially raised the bar for audience participation. The expected trade-off was that the BlockBot would ‘travel’ more slowly between participants but was in less danger of being destroyed or lost between two participants than the TouristBox.

4.4 BlockBot Design

Several considerations went into the design of the BlockBot (see Figures 1 and 2). First, the design aims to minimize - though not completely remove - anthropomorphic or zoomorphic appearance as well as life-like behavior, while evoking robot-like connotations. The cubical body, the charging cable, and the sleek monotonous color add to the BlockBot’s artificial look, comparable to a monolithic home assistant. We decided to keep the simple face as part of the design, giving us an element that provides both a minimal sense of realism in appearance and behavior and a trigger for the host’s initial decision to treat the item as a subjectified object. BlockBot cannot move, but it is able to change its face to a sleeping face while charging, which gives it a minimal notion of behavior. We did not add any emotions aside from the smile, to curb any further realism in the BlockBot’s appearance and behavior. In Section 6, we discuss the possibility that the face impacted the results of this study and suggest displaying different shapes - or nothing at all - instead of the BlockBot’s face in future studies.
Second, the BlockBot needs to have a size that makes it convenient for participants to keep it close to them, as well as easy to pass on from one participant to the other. The exterior of the BlockBot is a cube of 12 × 12 cm (4.7 × 4.7 inches). This size allows enough space for the display and Arduino Uno micro-controller board inside the bot and is small enough for people to easily transport. A small plastic plaque has been attached to the back - so that it cannot get separated from the BlockBot - which reads: ‘I want to make friends and see the world. Can I stay at your place for a bit? Please hand me to a friend afterwards! You can charge me using my tail. Let my parents know how I am doing and where I am at: [phone number]’. We chose to write the message in English to allow a wider demographic to interact with it.
Finally, the BlockBot needs to be durable, since it is deployed unsupervised into human-inhabited environments. While the BlockBot prototype contained a rechargeable battery, this was removed in the second-generation BlockBots to prevent any potential lithium-battery-induced fire hazards; the USB charging cable now leads directly into the BlockBot. The BlockBot is made from medium-density fiberboard (MDF) treated with water-resistant polish. The display is a Waveshare 2.7 inch e-ink display, operated by an Arduino Uno micro-controller board. A RobotDyn RTC (real-time clock) with a three-year battery life was added to keep track of time. When the BlockBot is connected to power, it reads the independently powered clock to determine which face to display: after 6:30 it wakes up, and after 22:00 it goes to sleep. When the BlockBot is not attached to power, the e-paper display retains the image. This design allows new code to be uploaded to the BlockBots while assembled, making them more versatile as a research platform. On the downside, the BlockBot will not switch faces unless it is plugged into power; however, the narrative that it should be charged at night potentially mitigates this problem.
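For readers who want a concrete picture of this face-switching logic, the sketch below is a minimal reconstruction of the described behavior, not the actual BlockBot firmware. It assumes a DS1307-compatible RTC read through Adafruit’s RTClib library; the e-paper update is stubbed out as a hypothetical drawFace() helper, since the exact Waveshare driver calls depend on the display model and library version.

```cpp
// Minimal sketch of the BlockBot wake/sleep face logic (assumptions:
// DS1307-compatible RTC read via RTClib; e-paper drawing stubbed out).
#include <Wire.h>
#include <RTClib.h>

RTC_DS1307 rtc;  // RobotDyn RTC modules are typically DS1307-based

enum Face { FACE_UNKNOWN, FACE_AWAKE, FACE_ASLEEP };

// Hypothetical helper: a real implementation would push the corresponding
// face bitmap to the Waveshare 2.7" e-ink display here. E-paper retains
// the last image drawn, so the face persists after power is removed.
void drawFace(Face face) {
  (void)face;
}

Face faceForTime(const DateTime& now) {
  int minutes = now.hour() * 60 + now.minute();
  // Awake from 06:30 until 22:00, asleep otherwise.
  return (minutes >= 6 * 60 + 30 && minutes < 22 * 60) ? FACE_AWAKE
                                                       : FACE_ASLEEP;
}

Face current = FACE_UNKNOWN;

void setup() {
  rtc.begin();  // the RTC keeps time on its own battery across power loss
}

void loop() {
  // This code only runs while the BlockBot is plugged in: re-check the
  // independently powered clock and redraw the face when it changes.
  Face target = faceForTime(rtc.now());
  if (target != current) {
    drawFace(target);
    current = target;
  }
  delay(60000);  // re-check once per minute
}
```

Because the e-paper holds its image without power, whichever face was drawn during the last charge simply persists when the BlockBot is unplugged, which is why the charge-at-night narrative keeps the faces roughly in sync with the day.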
Sourcing components for the BlockBot was straightforward: all technical components were acquired from electronics stores. The MDF exterior was constructed by two woodworkers; however, no extensive woodworking skills are necessary to construct it. Researchers aiming to recreate the BlockBot for future studies are advised to leave one side of the BlockBot - in our case the back - dismountable, in order to facilitate troubleshooting.

4.5 Deployment

Three BlockBots were deployed in Amsterdam, The Netherlands. To break this pattern, a fourth was released in Nijmegen, close to the border with Germany. The initial seed participants were selected from the authors’ own social circles and differed in the social make-up of their domestic settings: a young family, a couple living together, a student, and a self-employed person with roommates. All initial seed participants were between 22 and 32 years old. The demographic of participants was outside the authors’ control after the initial seed participants had passed the BlockBot on to a new participant of their choice.
To ensure that the initial participants were not biased in their engagements with the BlockBot, they were given only limited details about the goal of the study. The initial participants were aware that the BlockBots were part of a study on human-robot interaction in human-inhabited environments, but were not instructed on the specific aim of this study, such as its research question or hypothesis. Neither were participants specifically encouraged to make attributions of identity or mind. The initial participants were simply instructed to host the BlockBot for a period of their own choosing, encouraged to send an update on the BlockBot via the number on the back of the box in whatever form they chose, and to pass the BlockBot to another host who they felt would pass it on as well. To allay concerns about the BlockBot infringing on participants’ privacy, the initial participants were informed to some degree about the BlockBot’s inner workings, such as the fact that it did not have any sensors and did not store any data. Participants were also told they did not have to worry about the BlockBot’s battery running out, but were encouraged to charge it occasionally. It was emphasized that the initial participants could do whatever they liked with the BlockBot, short of destroying it. Concerning passing on the BlockBot, the initial participants were requested not to mention that the BlockBot was part of a study or project, but rather that it had already been traveling for a while before them. This would ensure - as far as possible - that the next participants’ interactions were as natural and as uninformed about the BlockBot as possible. We observe no apparent differences between the data gathered from the initial seed participants and from later participants that would imply that the initial participants were biased in their engagements.

5 Results

In this section, we present the results of our study. To reiterate, the goal of our research is to explore in an open-ended manner whether it is possible for people to develop a relationship with a robotic artifact, in the sense that they subjectify the artifact, even if it has low levels of human-like appearance and behavior. We also explore whether common locus, i.e., the sharing of space and time leading to perceived joint experiences and activities, could be a factor in developing this bond.
The open-ended nature of this study allows for indefinite gathering of qualitative data, since there is no clear end goal to the BlockBot’s journey. The first BlockBot prototype was deployed for six weeks, from July 13, 2020 until August 26, 2020. The second generation of three BlockBots was deployed in the field on August 18, 2020, two of which are still deployed. For practical purposes, we have included results up to October 1, 2020 in this paper. See Table 1 for the total number of participants, text messages, and photos received.
Table 1. Communications Received for All of the BlockBots

Participant ID   Number of pictures   Number of messages   Number of videos
P1               5                    1                    0
P2               1                    2                    0
P3               1                    1                    0
P4               0                    1                    0
A1               3                    4                    1
B1               2                    2                    0
B2               3                    3                    0
B3               3                    2                    0
B4               1                    1                    0
B5               1                    1                    0
C1               5                    10                   1
C2               0                    1                    3
C3               13                   10                   0
C4               1                    0                    0
Total            39                   39                   5
Average          2.8                  2.8                  0.4
Stdev            3.4                  3.2                  0.8
Median           1.5                  1.5                  -

Letters denote different BlockBots and numbers denote different participants in a BlockBot chain.
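As a small reproducibility aside, the snippet below recomputes Table 1’s summary rows from the per-participant counts; it is our own verification code, not part of the study, and it makes explicit the assumption that ‘Stdev’ is the sample standard deviation (n-1 denominator), which matches the reported values.

```cpp
// Recompute Table 1's Total/Average/Stdev/Median rows from the raw counts.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

void summarize(const char* label, std::vector<double> v) {
  double sum = 0;
  for (double x : v) sum += x;
  double mean = sum / v.size();
  double ss = 0;
  for (double x : v) ss += (x - mean) * (x - mean);
  double stdev = std::sqrt(ss / (v.size() - 1));  // sample stdev (n-1)
  std::sort(v.begin(), v.end());
  // n = 14 participants (even), so the median averages the middle pair.
  double median = (v[v.size() / 2 - 1] + v[v.size() / 2]) / 2.0;
  std::printf("%s: total=%.0f mean=%.1f stdev=%.1f median=%.1f\n",
              label, sum, mean, stdev, median);
}

int main() {
  summarize("pictures", {5, 1, 1, 0, 3, 2, 3, 3, 1, 1, 5, 0, 13, 1});
  summarize("messages", {1, 2, 1, 1, 4, 2, 3, 2, 1, 1, 10, 1, 10, 0});
  summarize("videos",   {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 3, 0, 0});
  return 0;
}
```

Running this reproduces the table’s rounded values (e.g., pictures: total 39, mean 2.8, stdev 3.4, median 1.5); for videos, the computed median is 0, which the table renders as ‘-’.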
In the end, each of the four BlockBots had a distinct ‘journey’. One BlockBot did not move from its starting location at all, one roamed the vicinity of Amsterdam, another passed through some of the major cities and ended up in the north of the Netherlands, while the last one traveled through Belgium and is currently back in the Netherlands. Based on participant communications referring to failed plans of traveling to countries such as France and Norway, we believe that the coronavirus pandemic curbed the BlockBots’ potential trajectories. One BlockBot was even directly affected by the COVID-19 pandemic and was stuck in quarantine for ten days due to one of its roommates testing positive.
The results show that most participants are disposed to communicate about the BlockBot (see Figures 3-5), yet to varying degrees. While some participants send only a few messages and photos or a series of minute-long videos, others share detailed, page-long reports on how they experience the BlockBot’s presence and how this changes over time. We observe a correlated trend between the volume of messages a participant sends and the amount of attribution we observe in their communication. Since participants are not instructed to communicate about their engagement with the BlockBot in any particular manner, we suspect this difference is due to individual differences between participants and the intensity with which they respond to the BlockBot’s presence in their life. One observation we can make is that participants seem to communicate with decreasing frequency and quantity over time (see Figure 6). Often participants send one or two messages with photos, after which either the frequency and quantity drop, or they pass it along. Even with participants who are highly frequent communicators, we note a decline in the frequency and quantity of communication over time. We discuss this potential novelty effect further in the Discussion section.
Fig. 3. Received images of BlockBot outside.
Through analysis of the received qualitative data, which consist of messages, photos, and videos, we gain a sense of how participants engage with the BlockBot. We discuss our data along two aspects. First, to what extent does the BlockBot manage to establish a common locus with its hosts? In other words, is the BlockBot actually in proximity to the people that host it, do participants spend time with it, and do participants take the BlockBot along for shared experiences? Second, what do people report about the BlockBot and their experiences with it? Specifically, we discuss this along two domains that we identified in the available data: identity attributions, which could be associated with the decision to treat the object as a subject, and mind attributions, as well as the interactions between the two. We also explore the relationship between common locus and these attributions and activities.

5.1 Common Locus

The BlockBot appears to successfully establish a common locus. The robotic artifact is present in the same spaces as participants over an extended period of time, including during events inside and outside the home. At times, participants take the BlockBot with them, i.e., they actively maintain the common locus. Participants keep the bot in their homes, most often the living room and sometimes the bedroom. From the received messages and pictures, we can observe that participants move the BlockBot around the house, taking it, for example, to the balcony to sit in the sun with them, to a study to be present while they work, or to bed with them at night (see Figure 7). Others seem to leave the BlockBot in one spot in their living room.
Participants also often physically move the BlockBot along with them to maintain a common locus outside the domestic setting. For example, one participant received the BlockBot in one city, where she hosted it for a couple of days, and later moved it along with herself to another city. After several weeks, she then moved it back to the original city, where she passed it on. Another participant took it on a trip to a farm in Belgium, back via several Dutch cities, and on several other trips. To name a few other examples, the BlockBot joined participants at a construction job, a university robotics laboratory, a barbecue in the park, and on a walk in the woods. Taking the BlockBot outside and on extra-domestic activities is reported by over half of all participants (see Figure 7). These examples show that the BlockBot solicits in some participants an active engagement in maintaining a common locus, by keeping it in close proximity inside and outside the domestic setting, and a tendency to turn experiences or activities into joint experiences or activities.

5.2 Identity Attribution

What do people attribute to the BlockBot? Almost all participants attribute some sense of identity to the BlockBot. The received messages show that the vast majority of participants refer to the BlockBot with a gendered pronoun (he/she), with a majority attributing a male gender (see Figure 8 for attributions at message level). So far, participants write almost exclusively about the BlockBot in the third person, referring to it as he, she, or the (ro)bot. Only one participant refers to it as ‘it’, and one other communicates in the first person and hence refers to the BlockBot as ‘I’.
The vast majority of participants also give the BlockBot a name. Some names clearly refer to an assigned gender, such as Brenda and Jules, while others reflect more on the robotic nature of the BlockBot, for example, Boxxie, Robbie and Botje (Dutch for ‘little Bot’). One participant communicated that she had tried several names before landing on a male name and questioned why she felt that particularly a male name fitted the box. She reported asking people in her social circle to help her name the BlockBot but stuck with her initial choice. Interestingly, the data show that names are also not necessarily retained between participants. For example, after Jules came Rust and after Brenda came Boxxie. This could indicate that participants mainly shape the identity of the BlockBot themselves.
Overall, at participant rather than message level, 71% of participants assign a gender or communicate in the first person, 71% assign a name or communicate in the first person, and 86% assign at least a gender or a name, or communicate in the first person.

5.3 Mind Attribution

In this section, we discuss the mind attributions that participants make to the BlockBot - agency, emotions, friendships, preferences, and intentions - in order of their observed frequency in the data at message level (see Figure 8).
Participants make several different attributions that refer to the BlockBot’s ability to act. An often-made attribution is the ability to see: for example, the BlockBot likes to observe the room and has seen several cities and sites. Another participant referred to the BlockBot as ‘gaining experience’ and ‘growing up’, implying a certain ability to learn. The BlockBot was also taken to a gym, where it was described as ‘being sporty’, ‘running contests and squash competitions’, and ‘learning new sports’. Generally, participants do not mention that they take the BlockBot to places or put it in a certain location. Instead, they use more action-oriented language, as if the BlockBot is able to move independently: for example, ‘Jules has seen three different cities’ or ‘He has been to the forest with us and we have played a game together’, rather than describing it as being taken there.
It is interesting to note that, when attributed emotions, the BlockBot is ascribed only positive ones in this experiment: happiness and hope. For example, one participant wrote that the BlockBot ‘was happy at the BBQ because it was fun’. Another wrote that the BlockBot ‘hopes’ to leave town soon. The latter could be marked as an attribution of both emotion (hope) and desire or intention (leaving town).
Several participants attribute a quality of friendship between the BlockBot and animals, toys, and statues with an animal likeness: a cat, a stuffed animal, a statue, and a pig. One participant referred to a BlockBot and a cat as ‘buddies’, another to the BlockBot and a pig as ‘friends’ (see also Figure 4). The BlockBot has also been attributed certain preferences, such as sitting in the sun on the balcony, sitting with the plants, or liking to observe people. The intentions attributed to the BlockBot by participants mainly seem to align with its deployment narrative: it wanted to see the world. We received several messages that aimed to communicate the BlockBot’s wish to travel on. The above attributions could have been influenced by several other factors, such as the BlockBot’s face and its deployment narrative, which states that it wants to see the world. We discuss the potential effect of other factors further in Section 6.
Fig. 4. BlockBot in the field portrayed by participants with real and fake animals.

5.4 Interactions between Common Locus, Activities and Identity, and Mind Attributions

So far, our analysis has mainly been ‘univariate’: discussing each observed variable separately. In this section, we discuss possible interactions and associations between the variables themselves. Although we are working with quite a low sample size, these interactions could provide speculative yet interesting insights that could be examined further in more extensive follow-up research.
Taking the BlockBot outside could be argued to be an effective way to maintain or build a common locus, since proximity is preserved even outside of the domestic setting (see Figure 9). It is therefore worth examining whether people who maintain a common locus by taking the bot outside with them also make richer attributions. Such an association would not prove causation, but even a correlation would be informative. When we sort our data into two groups, those that take the BlockBot outside and those that do not, several possible correlations can be observed in terms of attribution.
Firstly, almost all of the participants that take the BlockBot outside also take the BlockBot on an activity. This is arguably unsurprising, since most activities take place outside of the domestic setting. Regardless, it shows that most of these participants do not simply put the BlockBot on their balcony or in their garden, but also take it on activities.
Secondly, participants who take the BlockBot outside show higher percentages of identity and mind attribution (see Figure 9). For example, the majority in this category attribute a name (75%) and gender (62.5%) to the BlockBot in comparison to only 28.5% and 50%, respectively, in the not-outside group.
While the sample is small, these two interactions could suggest a relation between actively maintaining a common locus through close proximity outside the domestic setting and the projection of mind attributions. This would be in line with our hypothesis, since we observe that more attributions are made by participants who take the BlockBot outside - and thereby preserve a common locus - than by those who do not.
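As an illustration of this comparison, the following sketch computes attribution rates for the outside versus not-outside groups. The participant records and field names are hypothetical stand-ins for our coded data, not the actual observations:

    # Hedged sketch of the two-group comparison behind Figure 9: split
    # participants on whether they took the BlockBot outside and compare
    # attribution rates. Illustrative placeholder records only.
    participants = [
        {"outside": True,  "name": True,  "gender": True,  "mind": True},
        {"outside": True,  "name": True,  "gender": False, "mind": True},
        {"outside": False, "name": False, "gender": True,  "mind": False},
        {"outside": False, "name": False, "gender": False, "mind": False},
    ]

    def rate(group, field):
        """Percentage of participants in the group for which `field` is True."""
        return 100.0 * sum(p[field] for p in group) / len(group) if group else 0.0

    outside = [p for p in participants if p["outside"]]
    not_outside = [p for p in participants if not p["outside"]]

    for field in ("name", "gender", "mind"):
        print(f"{field}: outside {rate(outside, field):.1f}% "
              f"vs not-outside {rate(not_outside, field):.1f}%")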
When we sort the data on participants that name the BlockBot - which provides an indication that a participant wants to build a certain bond or relation with the robotic artifact - one interesting observation that emerges is that there is an apparent hierarchy to the attributions made to the BlockBot (see Figure 10). We observe that, almost exclusively, identity attributions outnumber mind attributions. Aside from the cases where no attributions are made, if there is an attribution of mind, a participant will have given the BlockBot a name and/or gender. This potentially indicates that an attribution of identity may be a necessary condition or enabler to make further mind attributions.
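This apparent hierarchy can be phrased as a simple implication - any participant with a mind attribution should also carry an identity attribution - which could be checked mechanically, as in this short sketch (the records and field names are again hypothetical placeholders):

    # Sketch of the hierarchy check: does every participant who makes a mind
    # attribution also make an identity attribution (name and/or gender)?
    # Placeholder records; not the study's actual coded data.
    records = [
        {"identity": True,  "mind": True},
        {"identity": True,  "mind": False},
        {"identity": False, "mind": False},
    ]

    violations = [r for r in records if r["mind"] and not r["identity"]]
    print("hierarchy holds" if not violations else f"{len(violations)} counterexample(s)")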
In total, we therefore observe three clear correlations. First, if a participant makes an attribution of identity, it is highly likely that they will also make mind attributions and take the BlockBot on activities outside. Secondly, if a participant makes one type of mind attribution, it is very likely that they will also make another: attributions beget attributions. Lastly, if a participant takes a BlockBot outside, it is highly likely that they will also make mind attributions.
However, these correlations could potentially be explained by an alternative effect. Some participants might simply be more inclined to behave in an active manner towards the BlockBot. In other words, there might be an underlying variable that causes participants to engage with the BlockBot to varying degrees, ranging from barely interacting with it at all to naming it, taking it on activities and ascribing mind attributions to it. This is not unlikely, since only half of the participants project mind attributions or take the BlockBot outside.
For example, when sorting participants on whether they make agency attributions to the BlockBot, we see that only half of the participants that had taken the BlockBot outside or on an activity also made an agency attribution (see Figure 11). This could suggest that taking the BlockBot outside or on an activity has a weak or no influence on mind attributions, since the results could be explained by pure chance: around half of participants could be susceptible to attributing mind qualities to the BlockBot. However, this raises the question of what exactly causes some participants to be susceptible to this kind of active behavior when interacting with the same robotic artifact. One study, for example, showed that people with higher levels of empathy were more positively influenced by a robot with a story than people with lower levels of empathy [21]. Perhaps the BlockBot triggers a certain personality trait in over half of the participants that causes them to engage more actively with the BlockBot than participants who score lower on this trait.

5.5 Summary of Findings

In conclusion, the results show that most participants demonstrate behavior that can be interpreted as anthropomorphizing the BlockBots, consistent with Dennett’s intentional stance. Despite the fact that the BlockBots score low on human appearance and behavior, the vast majority of participants decide to treat these objects as subjects, as measured by assigning some form of identity (86% of participants), with over half of the participants also projecting mind attributes onto the BlockBot, such as agency, emotion, intention and preference. Hence, these data are indicative of the BlockBot passing a social threshold, with a majority of participants experiencing its presence as that of a social agent. Also, participants who name the BlockBot are more likely to make mind attributions.
Participants are motivated to engage with the BlockBot, and the data suggest that the BlockBot successfully establishes a common locus with participants. It is present in the domestic proximity of participants, and a subset of participants show motivation to keep this common locus intact when they move to a different location in or outside the domestic setting, by taking the robot with them. For this group, we also observe higher identity and mind attribution rates. This does not imply causation, but it is at least an indication of a correlation between common locus and attribution.
One should consider the relatively small number of observations and the uncontrolled conditions that come with the set-up of an open in-the-wild study, but the additional, more qualitative information gleaned from the texts and images aligns with the thesis that mind and identity attributions are made to BlockBot, and that common locus plays at least some role in this.

6 Discussion

In this section, we provide a deeper interpretation of the results and discuss the implications for the field of human-robot interaction ‘in-the-wild’. We also discuss several alternative factors that could explain our data, methodological limitations of our approach and opportunities for future studies.
This paper theorized that the act of living or working in close proximity to, or in cooperation with, a robot for an extended period of time can facilitate the experience of a robot as a social agent, even if this robot displays few social cues in appearance and behavior. As an initial exploration of this hypothesis and of the common locus as a relevant variable, we positioned a minimal robotic artifact called BlockBot in close proximity to people’s daily lives, where they could engage with the entity in a naturalistic and unsupervised manner and self-report on engagements with the BlockBot.

6.1 Interpretation

The presence of the BlockBot in domestic settings and the participants’ full control over BlockBot engagement provide a unique research setting. The data show that people are motivated to engage with the BlockBot and to communicate about it in written and photographic form. The communication about the BlockBot by the participants could be characterized as closer to communication about a social agent than about a lifeless object. The robotic artifact seems to fulfill the role of a compliant (happy to join in all activities) but restless (wanting to leave and travel on) guest in the lives of its hosts.
The data indicate that the BlockBot succeeds in establishing and maintaining a common locus with participants and that participants are prone to engage socially with the BlockBot. Participants show motivation to keep the BlockBot inside their homes, in close proximity to their daily lives, to maintain a common locus with the BlockBot between different locations, and even to take it along on activities outside the domestic setting.
Of course, a common locus between BlockBot and participant is not a given. One BlockBot journey is particularly descriptive of this phenomenon. BlockBot A was placed in the domestic setting of a young family (a couple and a young son) living in Amsterdam. After initial communication about the young son and the BlockBot, communication ceased. Eventually, about four weeks later, we received an apology message stating that the participant had never passed on the BlockBot, already had too many other ‘creatures’ in his life (girlfriend, child, cat) and apologized for being such a bad parent. Some other participants have also communicated feelings of guilt for not taking care of, or loving, the BlockBot as much as they would like.
Most participants, however, do make the effort to include the BlockBot in their daily lives outside of their domestic setting, which goes beyond its basic request to be hosted and passed on in due time. Participants feel motivated to bring it along to their jobs or on leisure activities during their time off work, take it to social events such as dinners and drinks, and travel around with it, even across national borders. This shows that participants feel motivated to share events, spaces, and contacts in their lives with the robotic artifact and, hence, to build and maintain a common locus between the two.
Furthermore, participants make identity and mind attributions to the BlockBot. Participants almost uniformly refer to the BlockBot as a gendered entity, and the vast majority give it a name. Over half of the participants also ascribe a mind attribution to the BlockBot. One interesting observation is that identity attributions to the BlockBot, such as a gender and/or a name, seem to strongly outnumber mind attributions, such as agency, emotion, relationships, preferences and intentions. Almost without exception, if there is an attribution of mind, a participant will have given the BlockBot a name and/or gender. This could point to a potential hierarchy within the attributions made to the BlockBot. Perhaps an attribution of identity is a precondition for communicating about attributions of mind.
However, as we already mentioned in Section 5, the potential cross-correlations between variables might be due to an underlying personal factor (or factors) that triggers some participants to strongly engage with the BlockBot, while others do not seem disposed to this behavior.
The apparent hierarchy in attributions made about the BlockBot, however, is reminiscent of Dennett’s intentional stance, as per his quote in Section 4.2 [23]. The ambiguous nature of the BlockBot’s function could be argued to force participants to adopt an active role in shaping their engagement with the BlockBot. Initially, the object is framed just enough as a creature that participants decide to treat it as a subjectified object and rational agent. Participants formalize this decision by assigning identity attributes such as a name or a gender. Of course, participants know it is an artifact, but as part of the simulation that play is, suspension of disbelief kicks in, and participants literally take an intentional stance. They deduce its beliefs, desires, and goals from the instructions on its back and attribute preferences and behaviors that fit these goals. In other words, after an initial conscious decision to subjectify the object in a playful setting, common locus leads to shared experiences, which leads to more attribution, and a bond forms, perhaps further reinforced by the sunk cost of the time invested.
Interestingly, what participants ‘predict the agent will do’ is in some cases not only projected onto the BlockBot but also acted out by the participant with the BlockBot. If a participant predicts that the BlockBot wants to sightsee, they will need to bring the BlockBot to the world outside. This further increases the effort invested in the narrative and the ‘evidence’ that BlockBot is a creature. It might also explain the observed correlation between participants taking the BlockBot outside and on activities and a relatively high number of identity and mind attributions.
In conclusion, we observe that participants are disposed to anthropomorphize the BlockBot - to make attributions of human qualities to the BlockBot - and show motivation to share time, spaces and events with the BlockBot. This suggests that the BlockBot has passed a social threshold and is experienced, to a degree, as a social agent. These observations are in line with our hypothesis and, accordingly, point to the relevance of the common locus.
While the results of our exploration are not conclusive, the data could suggest that a common locus between a person and a robot in itself functions as an important catalyst for a robot to be experienced as a social agent, provided there was just enough of a trigger for the human to initially decide to treat the artifact as a subject to some degree. A robot’s proximity to a human’s daily life can give the sensation of sharing time, spaces, and activities with the robot, which influences the experience of the robot as a social agent, even if the robot possesses little life-like realism in appearance and behavior. This is not unlike how humans befriend others who happen to live next door, work in the same team or belong to the same club. With an increasing number of robots, robotic artifacts, and behavioral objects in people’s lives, what will be the consequences of this perception?

6.2 Implications for Human-Robot Interaction

Conventionally, when a human-robot interaction is desired, the interaction is not the final goal in itself, but is often there to capture attention or affection and redirect it for a certain purpose. One example is therapy: interaction with a social robot seems to improve the social engagement of autistic children in human-human interaction [18]. Another example is teamwork: human-robot teams perform better when humans are emotionally attached to the robot [103]. The common locus could provide a new factor in establishing human-robot relations for these types of cooperation. If a social interaction or bond with a robot is a desirable design feature, awareness that (prolonged) proximity of a robot in a human’s social life can facilitate social engagement might in turn inform the design and usage of such a robot.
Likewise, when robots are used in sensitive contexts, such as care, one needs to be very aware of all the factors that influence interaction and relationship building, and of their side effects - not only in terms of appearance and behavior, but also in terms of common locus. In addition, we can also imagine scenarios in which we explicitly want to prevent these types of social engagement towards robots. While attachment is a valuable quality in human-robot teaming, these attachments could potentially endanger human lives, for example when attachment to a robot drives a soldier to place himself in harm’s way to rescue the robot [10]. In such cases, if social or emotional engagement is undesirable, one could argue that a robot should not only be ‘de-anthropomorphized’ in its appearance and behavior, but that a common locus between human and robot should be avoided as well, for example by turning the robot off and storing it out of sight with other equipment when it is not deployed. Naming the object should be avoided, the use of non-anthropomorphic language should be encouraged, and specific bomb disposal robots should be frequently rotated across units.
There also exists sizable mistrust concerning the use of robots in the care and therapy of children, the elderly and the disabled, and apprehension about the introduction of robots in other areas such as education, healthcare, and leisure [18]. One concern is that the authenticity of human-robot attachments may negatively affect human-human or human-animal relationships (for further reading see [95] and [14]). The idea that a robot’s proximity to one’s daily life could lead to social engagement and even a certain bond could provide a basis for some to argue that robots should be kept away from humans in order to prevent human-robot attachments from deteriorating human-human relations. Obviously, as mentioned before, our research is open-ended, less controlled, and qualitative, but specifically for sensitive areas it is already relevant to point out that common locus is a potential risk that deserves further study in any of these high-risk implementations.

6.3 Known Limitations

The results presented in this study are by design indicative and explorative, and this paper does not claim to provide any definitive conclusions. In this study we used an open-ended and unsupervised methodology to study how people make sense of a robotic artifact. This choice came at the expense of several of the benefits that a laboratory study offers and limits the number of factors that we can control as well as the quantity and quality of the data that we gather. Accordingly, we encountered several limitations of our methodology, which we will discuss below.
Firstly, the addition of new participants is slow. It takes considerable time for people to host a BlockBot and pass it on. This can be explained by the fact that the BlockBot asks to be passed on to a friend or an acquaintance, which raises the bar for movement between people.
Secondly, the quantity and frequency of the data are irregular and differ greatly between participants. While some participants seem very disposed to communicate about the bot, others seem less inclined to do so. Accordingly, some participants send us two messages and a single photo before passing the BlockBot on, while others keep it for three weeks and write three pages full of observations and experiences with the BlockBot. When a participant who receives the BlockBot does not communicate with the phone number, we cannot gather data and we lose track of the BlockBot’s location. Recording the data was less problematic than receiving it: once messages, images or videos had been sent to the mobile phone number presented on the BlockBot, this data was labeled and stored for further analysis.
Thirdly, the quality of the data that people send varies between participants. While some take a very active role towards the bot, attributing life-like qualities to it, others merely describe where it is or do not report much about it at all. This leaves open the possibility that some engagement was under-reported. Finally, it is important to realize that the people who participated in this study did so voluntarily and were willing to host the BlockBot, which could suggest a caring, proactive nature or a preexisting interest in robots. We are also aware that the demographic make-up of our initial participant pool is not representative of all age groups and ethnic backgrounds. Cross-cultural studies have shown that there are differences between cultures in how robots are perceived [59, 86, 94].
We acknowledge the possibility that asking participants to self-report their attributions and actions with the BlockBot might not paint a complete picture of their social interaction with the robotic artifact, and is bound to encourage messages that contain attributions. For this study we deliberately restricted ourselves to the information we received through WhatsApp, without any meaningful stimuli from the experimenters. A potential solution in future research, one that would preserve the initial unsupervised interaction between participants and the BlockBot, would be to follow up with participants about their engagement through interviews or surveys after they have hosted it, or to place a BlockBot in one household for a set amount of time and interview participants about their connection to the BlockBot during and after its stay. This approach could reveal anecdotes, attributions or activities that were not reported.
Also, we are well aware that participants may want to ‘play along’, actively choosing to suspend their disbelief without ‘really’ believing the bot has a mind, desires, emotions, intentions and so forth. That said, we kept the task as open-ended as possible, without implying that this was a research experiment in human-robot relationships. We also do not make any specific ontological claims as to whether participants truly attributed a mind, yet we think it is interesting that people do this even in a purely playful context, and we see play as a form of reality as well. At the end of the day, as Shakespeare put it: ‘All the world’s a stage’.

6.4 Future Studies

While our data can be argued to support our hypothesis, the question remains to what extent other factors might have influenced the results. We discuss several alternative or complementary factors: a possible framing effect, a novelty effect, the BlockBot’s appearance, make-believe play, and the extent to which the social engagement we observe differs from object attachment. This list is not meant to be exhaustive. We discuss these possible factors below and make recommendations on how to address them in future work. With an identical set-up but a larger fleet of robots traveling over a longer period of time, we could explore the topic more deeply and draw firmer conclusions.

6.4.1 Framing Effect.

One aspect that potentially influences the results is the possibility that the deployment narrative of the BlockBot already primes participants to perceive it in a certain manner. The BlockBot is presented as an entity that wants to travel the world, stays with people and has parents that people can report to. This narrative mainly functions as a forward-propelling mechanism for the BlockBot, in order to increase the number of participants and, subsequently, the amount of data that we receive. However, does this narrative also function as a framing device that influences the participants’ perception of the BlockBot from the start? This could be studied by experimenting with different framing variants.
The exact mechanisms and effects of framing on robot perception are not yet fully understood. While some studies have shown how (anthropomorphic) framing can affect how people perceive robots [20, 21], other studies found that anthropomorphic framing had no effect on the human-like perception of the robot [72]. Other times, effects are subtle. One study found subtle differences in children’s gaze behavior between a robot that was framed as a social agent and one framed as a machine-like being [100], but this difference in perception was not present in the children’s pre-test and post-test evaluations [101]. Another study showed that while framing does affect participants’ mind perception of a robot in a laboratory setting, this effect is hard to replicate in real-world studies [98].
It is important to point out that participants are not explicitly encouraged or discouraged to anthropomorphize the robotic artifact. Furthermore, the majority of attributions made to the BlockBot are unrelated to its deployment narrative. For example, the narrative does not provide the BlockBot with a gender, a name or any preferences. However, one can argue that - even if the attributions are unrelated - presenting BlockBot as a robotic artifact with a goal lowers the threshold for participants to make attributions, compared to a situation in which there was no ‘framing’ to begin with. Hence, we acknowledge the possibility that the deployment narrative of the BlockBot frames how participants view the robotic artifact. In future studies, we could, for example, compare different narratives with which we deploy the BlockBot. This could mean different narratives on the back of the BlockBot or different narratives told to the initial participants.

6.4.2 Visual Appearance: Face.

Due to the open-ended nature of this study, which did not isolate single aspects of the BlockBot, the question arises whether the BlockBot passes the social threshold based on the common locus, or whether its arbitrarily low realism in appearance or behavior was still ‘high enough’ for it to pass the social threshold based on those qualities (or a combination of them). Critics could, for example, point to the fact that the BlockBot displays a ‘face’ as the main reason why we observe participants ascribing identity and mind attributions to the robotic artifact. In general, to push the boundary further, it may be interesting to experiment with lowering visual realism by simplifying, neutralizing or even removing the smiley face.
A similar argument as with the framing effect can be made here. While some attributions seem related to the BlockBot’s face (the ability to observe, for example), there are other examples that seem unrelated to this feature, such as having certain preferences. An alternative explanation is that, because the BlockBot is almost always positioned in a safe domestic environment and not exposed to any danger, it might appear to feel nothing but happiness in the eyes of participants. In our preliminary study with the TouristBox, which was left outside to be picked up by passersby, some participants did attribute emotions such as fear and worry, regardless of its fixed smiling face.
The face-based explanation would also imply that the same BlockBot would be able to pass a social threshold if participants interacted with it in a laboratory setting, or that the same BlockBot without a face would not be able to pass a social threshold in a common locus setting, which our hypothesis states it would. Due to the nature of our study, we cannot exclude this possibility with certainty, and we concede that the presence of the face could potentially lower the threshold for participants to make attributions that are unrelated to its appearance alone. This establishes the need for a future study in which we deploy BlockBots with even more abstract faces, for example without a mouth or without any face on the display, and see whether similar results can be generated that indicate that the BlockBot has passed a social threshold.

6.4.3 Novelty Effect.

The observed results could potentially be influenced by a certain novelty effect of having an unfamiliar, ambiguous robotic artifact in the social setting of the home. A novelty effect would cause participants to initially be very disposed to engage with a robot and to lose interest after its novelty has ‘worn off’. This type of effect has been observed in previous studies [50]. One study, which aims to provide a formal model of anthropomorphism, considers anthropomorphism a dynamic concept that evolves over time. This model theorizes that at the start of a human-robot interaction, anthropomorphism spikes due to a novelty effect, before familiarization stabilizes this tendency, only to spike again in response to disruptive and surprising robot behavior [56]. In future work, one could introduce a variant in which, at very low frequency (say, once a week), the bot demonstrates a certain behavior to keep stimulating engagement or the passing on of the device (for instance when it has not moved), though this may reduce the open and unbiased nature of our set-up and actually increase novelty bias.
In our current data we do observe a frequency pattern that could be interpreted as a novelty effect (see Figure 6). Participants often communicate most about the BlockBot in the early days after receiving it, or communicate at a relatively high frequency at one moment compared to later moments. Often participants send one or two messages with photos, after which either the frequency drops or they pass it along. Even among participants who are highly frequent communicators, we note a decline in the frequency and quantity of communication over time. One participant who had a BlockBot for over two weeks noted that her BlockBot “Jules” slowly lost his “magic robot powers” and became a part of the interior, akin to a plant, lamp or printer. It is possible that a novelty effect caused an initially high level of anthropomorphism, which in turn affected the type of communication we received from participants.

6.4.4 Make-believe Behavior.

An alternative explanation of the pro-social behavior that we observe in the communication with participants about the BlockBot is that these participants engage in a sort of make-believe or play behavior. Such behavior is believed to play a crucial role in children’s development, emotional health, learning and self-regulation [6, 36, 57, 76]. Participants might be aware that the simple robotic artifact possesses none of the mental states they ascribe to it, yet they suspend their disbelief or engage in make-believe play. In that sense, this playing-along behavior does not make the results of this type of study less real; future research could actually provide additional insight into why people are willing to adopt an intentional stance.
Several authors have applied this framework to human-robot interaction, drawing on the work of philosopher Kendall Walton and his theory of make-believe behavior [25, 83]. Walton argued that people ignore their disbelief rather than suspend it. Make-believe, he proposes, entails the creation of a fictional world that participants inhabit. People use objects as ‘props’ to generate such fictional truths [25]. When making attributions to the BlockBot that participants might know are false, they are considering what Rueben and colleagues call ‘fictional states of affairs’ [83]. Within this ‘Waltonian account’, interacting with the BlockBot, or any robot for that matter, is akin to engaging with other media such as literature, theater, or film.
This perspective suggests that the observed behavior is the result of the BlockBot being a convincing piece of fiction. As Novitz wrote: “To believe fiction, we have seen, is to be deceived by it, and while deception may promote an appropriate emotional response, it can never promote a proper understanding of the work. To disbelieve or to discount the work, on the other hand, prevents us from acquiring those beliefs necessary for an appropriate emotional response to it. Rather than respond to fiction by believing, disbelieving, or discounting it, one must respond imaginatively by making-believe. Such imagining, we have seen, is for the most part derivative, and involves thinking of or considering the fictional world described by the author without a mind to the factual vacuity of his descriptions. It is this which allows us to acquire beliefs about creatures of fiction which are capable of moving us”.
However, we should be wary, as Jane McGonigal points out, of being more convinced by the participants’ ‘performances’ than they are convinced by their own make-believe [68]. Writing within the field of pervasive gaming, McGonigal refers to the practice of ‘users’ staging, performing and playing along with an unfolding experience as performed belief [45, 68]. Jacobsson suggests that such performed beliefs are fundamental and capture ‘an essential part of how ... owners of robots appear to advance and enrich their experience’ [45].
Finally, we can ask what motivates adults to engage in this behavior, through what mechanisms this make-believe play is reinforced or challenged, and what the potential instrumental value of such behavior could be. The intentional stance provides a possible answer. It is called a stance for a reason: the human does not necessarily need to believe that the other ‘really’ has goals, beliefs, intentions, and so on. The human simply decides to treat the other as a subject, for example because it makes its actions easier to predict. Given this, the intentional stance works equally well in ‘simulated realities’, like play, as in reality. In other words, the intentional stance combined with make-believe behavior and play poses a promising avenue for future research.

6.4.5 Object Attachment.

A final question worth considering is to what extent the observed social engagement towards the BlockBot differs from other forms of object attachment. Robots are usually understood as mechanical artifacts that possess a level of autonomy, agency, and choice [49]. The BlockBots presented in this study possess none of these qualities; hence, we have referred to the BlockBot as a robotic artifact, as opposed to a robot. However, has our methodology taken such a minimal approach that we have left the field of human-robot interaction and entered that of human-object relations?
Objects, tools, and machines function, beyond their practical application, as formative agents of social complexity and experience [3], and as a ‘major contributor and reflector of our identities’ [5]. Cars, for example, facilitate the formation of memories through road trips and holidays, embody ideas of ‘freedom’, ‘responsibility’ or ‘status’ and, through years of usage and upkeep, provide users with a certain sense of identity and a potential emotional bond [3]. In this sense, we relate to objects as an extension of our own identity. It is well observed that people across age ranges can form strong attachments to objects. Children, for example, often have a ‘transitional object’: a favorite stuffed animal, blanket, or toy that they form strong and persistent attachments to (although this is not a universal event in child development) [60]. Adults might possess certain emotional objects whose value is not (primarily) derived from their function, but which they find valuable because of what they attribute to them: the watch of a deceased family member that holds certain memories, or a medal that signifies one’s competitiveness, physical qualities and persistence. These objects often feel like a part of our identity, and some would feel as if they lost a part of themselves if these emotional objects were lost or destroyed.
There is a difference between objects (or even living entities) that may simply be regarded as objects, such as a shoebox, a mail-sorting robot or an avocado - or even objects one has grown attached to, such as house keys and medals - and subjectified, artificial creatures such as a teddy bear or certain robots like our BlockBot. The difference lies in how these artificial creatures ‘appear’ to us and in which ontological category we place them. Damiano and Dumouchel note that people across age groups perceive robots as “ambiguous objects” in relation to traditional ontological categories [19]. Robots seem to inhabit a place on a spectrum between alive and not alive, sentient and not sentient, intelligent and not intelligent [48]. We observe this ontological ambiguity in the results when we analyze what participants portray the BlockBot with (see Figures 4, 5 and 12). Participants do not exclusively place the BlockBot in an ontological category of living or non-living entities, but seem to associate it with humans, animals, plants, toys and inanimate objects. While someone can grow attached to an object, this does not imply that this person regards the object as another entity or subject: you can grow attached to a watch without considering it another entity.
Fig. 5. BlockBot in the field portrayed with plants.
Fig. 6. Frequency and volume of communication about BlockBots. The Y-axis denotes the number of combined messages and images sent by a participant. The X-axis denotes time of deployment.
Fig. 7. What do people do with BlockBot?
Fig. 8. What kinds of attributions do people make to BlockBots?
Fig. 9. Attributions by participants that take the BlockBot outside versus those who do not.
Fig. 10. Attributions by participants that give a name to BlockBot versus those who do not.
Fig. 11. Attributions for participants that attribute agency to the BlockBot versus those who do not.
Fig. 12. What is the BlockBot portrayed with?
However, this divide seems to be a highly individual and dynamic process. For example, while some people readily attribute moods and a certain persona to their car, others do not have this tendency whatsoever. For some, a car is nothing more than an object, while others grow attached to their cars, give them an identity and attribute emotions to them. Therefore, when considering the difference between robot and object attachment, it is perhaps more useful not to distinguish between objects, machines, and robots, but rather between entities that pass a social threshold and those that do not solicit this perception. In other words: between objects that people ‘subjectify’ and objects that people ‘objectify’.
This raises the question: does a mechanical artifact need to possess certain qualities to earn the moniker of a ‘robot’, or is it sufficient that a human merely believes, or performs as if, it has those qualities? One could argue that if a human believes an entity has the capacities of a robot, it is irrelevant, when studying their engagements, whether the entity they engage with actually possesses them. As Coeckelbergh puts it: “the ‘content’ that counts here is not what is ‘in the mind’ of the robot, but what humans feel when they interact with the robot” [15]. In that sense, a future line of research could also take characteristics of the hosts into account, alongside robot (appearance, behavior) and system (common locus) characteristics.

7 Conclusion

Science fiction has provided us with an image of a future cohabited by humans and robots. While robots are far from the autonomous entities they are portrayed to be in various media, robotic technologies are increasingly present in our daily lives and the spaces we frequent and inhabit, forming bonds and attachments with humans. The presence of robots in a human-inhabited environment provides an exciting new frontier for the research of human-robot interaction and relationships in the wild. This paper aims to contribute to this novel body of research. Specifically, the aim of this study is to introduce the concept of the common locus as a relevant variable for robots to pass a ‘social threshold’, after which they are perceived not as a machine, but as a social agent. We theorized that, aside from its realism in appearance and behavior, the presence of a robot in close proximity to a person’s daily life, space and experiences could be a relevant factor for a robot to pass this social threshold. This may inform robot interaction research and design, both for situations when bonding is desired versus when it should be avoided.
We designed a robotic artifact called the BlockBot that does not make a strong claim of realism of appearance or behavior but is aimed to facilitate a common locus with a participant. We released four BlockBots into ‘the field’ to be hosted and passed on by people in an unsupervised, open-ended experiment to study what kind of human engagement would arise based on the robotic artifact’s presence in the participants’ domestic setting. We theorized that, regardless of its lack of realism in appearance or behavior, the BlockBot would still be able to pass a social threshold above which participants would perceive it as a social agent.
Although the initial data is limited and of a qualitative nature, the results indicate that BlockBot successfully establishes a bond as well as a common locus with participants, who are prone to engage with the BlockBot. Participants keep the robotic artifact close to them and anthropomorphize it. The BlockBot evokes identity and mind attributions and is frequently taken on social activities outside of the home, even on long trips. These observed engagements could suggest that the BlockBot has passed a social threshold whereby participants consider it - to a certain degree - a social actor. Given that we minimized its realism in appearance and behavior, the results could suggest that its common locus was a relevant factor in passing the social threshold. However, we also identified and discussed several alternative explanations that might have influenced the observed engagements: a framing effect, a novelty effect, its appearance, make-believe play and object attachment.
The concept of common locus implies that the presence of robots and robotic artifacts in our daily lives will impact how we perceive these robots. It is important to consider this dynamic because these attributions can be harmless and even useful in cases where bonding is desirable. However, if the perception of a robot as a social agent and any possible social engagement is undesirable, the common locus implies that certain robots should be kept away and out of sight when not used. Merely removing anthropomorphic qualities is potentially not sufficient.
Two BlockBots are still out there couch surfing and operating as figurative ‘cultural probes’ into human-inhabited environments [33]. We are excited to follow their journey, see what data they will generate, where they will go, what activities they will be part of and what attributions will be made along the way. Our in-the-field approach turned out to be an exciting methodology for studying human-robot interaction. We have received an interesting variety of messages, photos, videos, and feedback. The positive reactions to the BlockBot point to the strength of its design and its viability as a research tool for human-robot interaction. Since the BlockBot can be designed with different appearances or behaviors, future studies could use the BlockBot not only to study the relevance of the common locus but as a research platform in a wide variety of other human-robot relationship studies in human-inhabited environments.

References

[1]
Patrícia Alves-Oliveira, Patrícia Arriaga, Matthew A. Cronin, and Ana Paiva. 2020. Creativity encounters between children and robots. In 2020 15th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 379–388.
[2]
Patrícia Alves-Oliveira, Pedro Sequeira, Francisco S. Melo, Ginevra Castellano, and Ana Paiva. 2019. Empathic robot for group learning: A field study. ACM Transactions on Human-Robot Interaction (THRI) 8, 1 (2019), 1–34.
[3]
Arjun Appadurai. 1988. The Social Life of Things: Commodities in Cultural Perspective. Cambridge University Press.
[4]
Christoph Bartneck, Michel Van Der Hoek, Omar Mubin, and Abdullah Al Mahmud. 2007. “Daisy, Daisy, give me your answer do!” switching off a robot. In 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 217–222.
[5]
Russell W. Belk. 1988. Possessions and the extended self. Journal of Consumer Research 15, 2 (1988), 139–168.
[6]
Laura E. Berk, Trisha D. Mann, and Amy T. Ogan. 2006. Make-believe play: Wellspring for development of self-regulation. In Play = Learning: How Play Motivates and Enhances Children’s Cognitive and Social-Emotional Growth. Oxford University Press. https://academic.oup.com/book/0/chapter/156539699/chapter-ag-pdf/44948077/book_9542_section_156539699.ag.pdf.
[7]
Lasse Blond. 2019. Studying robots outside the lab: HRI as ethnography. Paladyn, Journal of Behavioral Robotics 10, 1 (2019), 117–127.
[8]
Valentino Braitenberg. 1986. Vehicles: Experiments in Synthetic Psychology. MIT Press.
[9]
Ashley Carman. 2019. They welcomed a robot into their family, now they’re mourning its death. The Verge (Jun.2019). https://www.theverge.com/2019/6/19/18682780/jibo-death-server-update-social-robot-mourning.
[10]
Julie Carpenter. 2013. The Quiet Professional: An Investigation of US Military Explosive Ordnance Disposal Personnel Interactions with Everyday Field Robots. Ph. D. Dissertation. University of Washington.
[11]
Elizabeth J. Carter, Samantha Reig, Xiang Zhi Tan, Gierad Laput, Stephanie Rosenthal, and Aaron Steinfeld. 2020. Death of a robot: Social media reactions and language usage when a robot stops operating. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 589–597.
[12]
Henning Christiansen, Anja Mølle Lindelof, and Mads Hobye. 2018. Breathing life into familiar domestic objects. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 589–594.
[13]
Bohkyung Chun and Heather Knight. 2020. The robot makers: An ethnography of anthropomorphism at a robotics company. ACM Transactions on Human-Robot Interaction (THRI) 9, 3 (2020), 1–36.
[14]
Mark Coeckelbergh. 2010. Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice 13, 2 (2010), 181–190.
[15]
Mark Coeckelbergh. 2011. Artificial companions: Empathy and vulnerability mirroring in human-robot relations. Studies in Ethics, Law, and Technology 4, 3 (2011).
[16]
Mark Coeckelbergh. 2011. Humans, animals, and robots: A phenomenological approach to human-robot relations. International Journal of Social Robotics 3, 2 (2011), 197–204.
[17]
Mark Coeckelbergh. 2021. Three responses to anthropomorphism in social robotics: Towards a critical, relational, and hermeneutic approach. International Journal of Social Robotics (2021).
[18]
Mark Coeckelbergh, Cristina Pop, Ramona Simut, Andreea Peca, Sebastian Pintea, Daniel David, and Bram Vanderborght. 2016. A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: Ethical acceptability, trust, sociability, appearance, and attachment. Science and Engineering Ethics 22, 1 (2016), 47–65.
[19]
Luisa Damiano and Paul Dumouchel. 2018. Anthropomorphism in human–robot co-evolution. Frontiers in Psychology 9 (2018), 468.
[20]
Kate Darling. 2015. ‘Who’s Johnny?’ Anthropomorphic framing in human-robot interaction, integration, and policy. Robot Ethics 2 (2015).
[21]
Kate Darling, Palash Nandy, and Cynthia Breazeal. 2015. Empathic concern and the effect of stories in human-robot interaction. In 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 770–775.
[22]
Alwin de Rooij, Joost Broekens, and Maarten H. Lamers. 2013. Abstract expressions of affect. International Journal of Synthetic Emotions (IJSE) 4, 1 (2013), 1–31.
[23]
Daniel Clement Dennett. 1989. The Intentional Stance. MIT Press.
[24]
Jérôme Dinet and Robin Vivian. 2015. Perception et attitudes à l’égard des robots anthropomorphes en France: validation d’une échelle d’attitudes. Psychologie Française 60, 2 (2015), 173–189.
[25]
Brian R. Duffy and Karolina Zawieska. 2012. Suspension of disbelief in social robotics. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 484–489.
[26]
Nicholas Epley, Adam Waytz, Scott Akalis, and John T. Cacioppo. 2008. When we need a human: Motivational determinants of anthropomorphism. Social Cognition 26, 2 (2008), 143–155.
[27]
Julia Fink. 2012. Anthropomorphism and human likeness in the design of robots and human-robot interaction. In International Conference on Social Robotics. Springer, 199–208.
[28]
Richard Fischer. 2013. Is it OK to torture or murder a robot? BBC Future (Nov.2013). https://www.bbc.com/future/article/20131127-would-you-murder-a-robot.
[29]
Jodi Forlizzi. 2007. How robotic products become social products: An ethnographic study of cleaning in the home. In 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 129–136.
[30]
Jodi Forlizzi and Carl DiSalvo. 2006. Service robots in the domestic environment: A study of the Roomba vacuum in the home. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. 258–265.
[31]
Megan Garber. 2013. Funerals for fallen robots. The Atlantic (Sep.2013). https://www.theatlantic.com/technology/archive/2013/09/funerals-for-fallen-robots/279861/.
[32]
Norina Gasteiger, Ho Seok Ahn, Christine Fok, JongYoon Lim, Christopher Lee, Bruce A. MacDonald, Geon Ha Kim, and Elizabeth Broadbent. 2021. Older adults’ experiences and perceptions of living with Bomy, an assistive dailycare robot: A qualitative study. Assistive Technology (2021), 1–11.
[33]
W. Gaver, A. Dunne, and E. Pacenti. 1999. Design: Cultural probes. Interactions 6, 1 (1999), 21.
[34]
Leonardo Giusti and Patrizia Marti. 2006. Interpretative dynamics in human robot interaction. In ROMAN 2006-The 15th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 111–116.
[35]
Katie Glaskin. 2012. Empathy and the robot: A neuroanthropological analysis. Annals of Anthropological Practice 36, 1 (2012), 68–87.
[36]
Dorothy G. Singer Roberta M. Golinkoff and Kathy Hirsh-Pasek. 2006. Play = Learning: How Play Motivates and Enhances Children’s Cognitive and Social-Emotional Growth. Oxford University Press.
[37]
Barbara Gonsior, Stefan Sosnowski, Malte Buß, Dirk Wollherr, and Kolja Kühnlenz. 2012. An emotional adaption approach to increase helpfulness towards a robot. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2429–2436.
[38]
Barbara Gonsior, Stefan Sosnowski, Christoph Mayer, Jürgen Blume, Bernd Radig, Dirk Wollherr, and Kolja Kühnlenz. 2011. Improving aspects of empathy and subjective performance for HRI through mirroring facial expressions. In 2011 RO-MAN. IEEE, 350–356.
[39]
Shelleen M. Greene. 2016. Bina48: Gender, race, and queer artificial life. Ada: A Journal of Gender, New Media & Technology 9 (2016).
[40]
Andrew Griffin. 2019. The heartbreaking final message NASA sent to its Mars rover. The Independent (Feb.2019). https://www.independent.co.uk/life-style/gadgets-and-tech/news/nasa-mars-opportunity-rover-final-message-billie-holiday-song-dead-a8784266.html.
[41]
Horst-Michael Gross, Andrea Scheidig, Steffen Müller, Benjamin Schütz, Christa Fricke, and Sibylle Meyer. 2019. Living with a mobile companion robot in your own apartment-final implementation and results of a 20-weeks field study with 20 seniors. In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2253–2259.
[42]
Erico Guizzo. 2010. The man who made a copy of himself. IEEE Spectrum 47, 4 (2010), 44–56.
[43]
Fritz Heider and Marianne Simmel. 1944. An experimental study of apparent behavior. The American Journal of Psychology 57, 2 (1944), 243–259.
[44]
Don Ihde. 1990. Technology and the Lifeworld: From Garden to Earth. Indiana University Press.
[45]
Mattias Jacobsson. 2009. Play, belief and stories about robots: A case study of a Pleo blogging community. In RO-MAN 2009-The 18th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 232–237.
[46]
Björn Jensen, Nicola Tomatis, Laetitia Mayor, Andrzej Drygajlo, and Roland Siegwart. 2005. Robots meet humans-interaction in public spaces. IEEE Transactions on Industrial Electronics 52, 6 (2005), 1530–1546.
[47]
Malte Jung and Pamela Hinds. 2018. Robots in the wild: A time for more robust theories of human-robot interaction. ACM Transactions on Human-Robot Interaction (2018).
[48]
Peter H. Kahn Jr., Batya Friedman, and Jennifer Hagman. 2002. “I care about him as a pal”: Conceptions of robotic pets in online AIBO discussion forums. In CHI’02 Extended Abstracts on Human Factors in Computing Systems. 632–633.
[49]
Despina Kakoudaki. 2007. Studying robots, between science and the humanities. International Journal of the Humanities 5, 8 (2007).
[50]
Takayuki Kanda, Takayuki Hirano, Daniel Eaton, and Hiroshi Ishiguro. 2004. Interactive robots as social partners and peer tutors for children: A field trial. Human–Computer Interaction 19, 1-2 (2004), 61–84.
[51]
Frédéric Kaplan. 2000. Free creatures: The role of uselessness in the design of artificial pets. In Proceedings of the 1st Edutainment Robotics Workshop. Citeseer, 45–47.
[52]
Kacie Kinzer. 2009. Tweenbots by Kacie Kinzer. http://www.tweenbots.com/. Online; accessed on 23-June-2020.
[53]
Sonya S. Kwak, Jun San Kim, and Jung Ju Choi. 2017. The effects of organism-versus object-based robot design approaches on the consumer acceptance of domestic robots. International Journal of Social Robotics 9, 3 (2017), 359–377.
[54]
Maarten H. Lamers, Fons J. Verbeek, and Peter W. H. van der Putten. 2013. Tinkering in scientific education. In Advances in Computer Entertainment, Dennis Reidsma, Haruhiro Katayose, and Anton Nijholt (Eds.). Springer International Publishing, 568–571.
[55]
Iolanda Leite, André Pereira, Samuel Mascarenhas, Carlos Martinho, Rui Prada, and Ana Paiva. 2013. The influence of empathy in human–robot relations. International Journal of Human-Computer Studies 71, 3 (2013), 250–260.
[56]
Séverin Lemaignan, Julia Fink, and Pierre Dillenbourg. 2014. The dynamics of anthropomorphism in robotics. In 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 226–227.
[57]
Deborah J. Leong and Elena Bodrova. 2012. Make-believe play. Young Children 29 (2012), 28–34.
[58]
Florent Levillain and Elisabetta Zibetti. 2017. Behavioral objects: The rise of the evocative machines. Journal of Human-Robot Interaction 6, 1 (2017), 4–24.
[59]
Dingjun Li, P. L. Patrick Rau, and Ye Li. 2010. A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics 2, 2 (2010), 175–186.
[60]
Carole J. Litt. 1986. Theories of transitional object attachment: An overview. International Journal of Behavioral Development 9, 3 (1986), 383–399.
[61]
Maria Luce Lupetti, Yuan Yao, Haipeng Mi, and Claudio Germak. 2017. Design for children’s playful learning with robots. Future Internet 9, 3 (2017), 52.
[62]
Amy Marcott. 2011. Experimenting with furbies. MIT Alumni Association (Sep.2011). https://alum.mit.edu/slice/experimenting-furbies.
[63]
Patrizia Marti, Alessandro Pollini, Alessia Rullo, and Takanori Shibata. 2005. Engaging with artificial pets. In ACM International Conference Proceeding Series, Vol. 132. 99–106.
[64]
Maja J. Matarić. 2007. The Robotics Primer. MIT Press.
[65]
Nikolaos Mavridis and Emmet Cole. [n. d.]. http://www.dr-nikolaos-mavridis.com/resources/WhenRobotsDie_NikolaosMavridis.pdf. Online; accessed 01-Oct-2020.
[66] Nikolaos Mavridis, Chandan Datta, Shervin Emami, Andry Tanoto, Chiraz BenAbdelkader, and Tamer Rabie. 2009. FaceBots: Robots utilizing and publishing social information in Facebook. In 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 273–274.
[67] Nikolaos Mavridis, Michael Petychakis, Alexandros Tsamakos, Panos Toulis, Shervin Emami, Wajahat Kazmi, Chandan Datta, Chiraz BenAbdelkader, and Andry Tanoto. 2010. FaceBots: Steps towards enhanced long-term human-robot interaction by utilizing and publishing online social information. Paladyn, Journal of Behavioral Robotics 1, 3 (2010), 169–178.
[68] Jane McGonigal. 2003. A real little game: The performance of belief in pervasive play. Proceedings of DiGRA 2003 (2003).
[69] Joseph E. Michaelis and Bilge Mutlu. 2017. Someone to read with: Design of and experiences with an in-home learning companion robot for reading. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 301–312.
[70] Masahiro Mori, Karl F. MacDorman, and Norri Kageki. 2012. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine 19, 2 (2012), 98–100.
[71] Clifford Nass, Jonathan Steuer, and Ellen R. Tauber. 1994. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 72–78.
[72] Linda Onnasch and Eileen Roesler. 2019. Anthropomorphizing robots: The effect of framing in human-robot collaboration. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63. SAGE Publications, Los Angeles, CA, 1311–1315.
[73] Yadong Pan, Haruka Okada, Toshiaki Uchiyama, and Kenji Suzuki. 2013. Direct and indirect social robot interactions in a hotel public space. In 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 1881–1886.
[74] Yadong Pan, Haruka Okada, Toshiaki Uchiyama, and Kenji Suzuki. 2013. Listening to vs. overhearing robots in a hotel public space. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 205–206.
[75] Phoebe Parke. 2015. Is it cruel to kick a robot dog? CNN (Feb. 2015). https://edition.cnn.com/2015/02/13/tech/spot-robot-dog-google/index.html.
[76] Anthony D. Pellegrini et al. 2009. The Role of Play in Human Development. Oxford University Press, USA.
[77] André Pereira, Iolanda Leite, Samuel Mascarenhas, Carlos Martinho, and Ana Paiva. 2010. Using empathy to improve human-robot relationships. In International Conference on Human-Robot Personal Relationship. Springer, 130–138.
[78] Elizabeth Phillips, Xuan Zhao, Daniel Ullman, and Bertram F. Malle. 2018. What is human-like? Decomposing robots’ human-like appearance using the anthropomorphic roBOT (ABOT) database. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 105–113.
[79] Byron Reeves and Clifford Ivar Nass. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
[80] François Rey, Michele Leidi, and Francesco Mondada. 2009. Interactive mobile robotic drinking glasses. In Distributed Autonomous Robotic Systems 8. Springer, 543–551.
[81] Astrid M. Rosenthal-von der Pütten, Frank P. Schulte, Sabrina C. Eimler, Laura Hoffmann, Sabrina Sobieraj, Stefan Maderwald, Nicole C. Krämer, and Matthias Brand. 2013. Neural correlates of empathy towards robots. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 215–216.
[82] Marco C. Rozendaal, Boudewijn Boon, and Victor Kaptelinin. 2019. Objects with intent: Designing everyday things as collaborative partners. ACM Transactions on Computer-Human Interaction (TOCHI) 26, 4 (2019), 1–33.
[83] Matthew Rueben, Eitan Rothberg, and Maja J. Matarić. 2020. Applying the theory of make-believe to human-robot interaction. In Culturally Sustainable Social Robotics. IOS Press, 40–50.
[84] Selma Sabanovic, Marek P. Michalowski, and Reid Simmons. 2006. Robots in the wild: Observing human-robot social interaction outside the lab. In 9th IEEE International Workshop on Advanced Motion Control, 2006. IEEE, 596–601.
[85] Donna Schreuter, Peter van der Putten, and Maarten H. Lamers. 2021. Trust me on this one: Conforming to conversational assistants. Minds and Machines 31, 4 (2021), 535–562.
[86] Takanori Shibata, Kazuyoshi Wada, Yousuke Ikeda, and Selma Sabanovic. 2009. Cross-cultural studies on subjective evaluation of a seal robot. Advanced Robotics 23, 4 (2009), 443–458.
[87] David Harris Smith and Frauke Zeller. 2017. The death and lives of hitchBOT: The design and implementation of a hitchhiking robot. Leonardo 50, 1 (2017), 77–78.
[88] Dimitris Spiliotopoulos, Ion Androutsopoulos, and Constantine D. Spyropoulos. 2001. Human-robot interaction based on spoken natural language dialogue. In Proceedings of the European Workshop on Service and Humanoid Robots. 25–27.
[89] Ja-Young Sung, Lan Guo, Rebecca E. Grinter, and Henrik I. Christensen. 2007. “My Roomba is Rambo”: Intimate home appliances. In International Conference on Ubiquitous Computing. Springer, 145–162.
[90] Yutaka Suzuki, Lisa Galli, Ayaka Ikeda, Shoji Itakura, and Michiteru Kitazaki. 2015. Measuring empathy for human and robot hand pain using electroencephalography. Scientific Reports 5 (2015), 15924.
[91] Dag Sverre Syrdal, Kerstin Dautenhahn, Ben Robins, Efstathia Karakosta, and Nan Cannon Jones. 2020. Kaspar in the wild: Experiences from deploying a small humanoid robot in a nursery school for children with autism. Paladyn, Journal of Behavioral Robotics 11, 1 (2020), 301–326.
[92] Jessica M. Szczuka and Nicole C. Krämer. 2017. Not only the lonely: How men explicitly and implicitly evaluate the attractiveness of sex robots in comparison to the attractiveness of women, and personal characteristics influencing this evaluation. Multimodal Technologies and Interaction 1, 1 (2017), 3.
[93] Redazione Tecnoscienza. 2011. The Beggar Robot by Sašo Sedlaček. TECNOSCIENZA: Italian Journal of Science & Technology Studies 2, 1 (2011).
[94] Gabriele Trovato, Massimiliano Zecca, Salvatore Sessa, Lorenzo Jamone, Jaap Ham, Kenji Hashimoto, and Atsuo Takanishi. 2013. Cross-cultural study on human-robot greeting interaction: Acceptance and discomfort by Egyptians and Japanese. Paladyn, Journal of Behavioral Robotics 4, 2 (2013), 83–93.
[95] Sherry Turkle. 2007. Authenticity in the age of digital companions. Interaction Studies 8, 3 (2007), 501–517.
[96] Jordi Vallverdú, Toyoaki Nishida, Yoshisama Ohmoto, Stuart Moran, and Sarah Lázare. 2018. Fake empathy and human-robot interaction (HRI): A preliminary study. International Journal of Technology and Human Interaction (IJTHI) 14, 1 (2018), 44–59.
[97] Peter van der Putten and Maarten Lamers. [n. d.]. Bots Like You. https://sites.google.com/view/botslikeyou. Online; accessed 19-July-2020.
[98] Sebastian Wallkötter, Rebecca Stower, Arvid Kappas, and Ginevra Castellano. 2020. A robot by any other frame: Framing and behaviour influence mind perception in virtual but not real-world environments. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 609–618.
[99] Astrid Weiss, Daniela Wurhofer, and Manfred Tscheligi. 2009. “I love this dog”: Children’s emotional attachment to the robotic dog AIBO. International Journal of Social Robotics 1, 3 (2009), 243–248.
[100] Jacqueline M. Kory Westlund, Marayna Martinez, Maryam Archie, Madhurima Das, and Cynthia Breazeal. 2016. Effects of framing a robot as a social agent or as a machine on children’s social behavior. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 688–693.
[101] Jacqueline M. Kory Westlund, Marayna Martinez, Maryam Archie, Madhurima Das, and Cynthia Breazeal. 2016. A study to measure the effect of framing a robot as a social agent or as a machine on children’s social behavior. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 459–460.
[102] Julia Whalen. 2017. ‘Goodbye, old friend’: CBC sends off mail robots. CBC News (Sep. 2017). https://www.cbc.ca/news/canada/toronto/goodbye-mailbots-party-1.4313207.
[103] Sangseok You and Lionel Robert. 2018. Emotional attachment, performance, and viability in teams collaborating with embodied physical action (EPA) robots. Journal of the Association for Information Systems 19, 5 (2018), 377–407.
[104] Andrea Zeffiro. 2016. Post-hitchBOT-ism. Wi: Journal of Mobile Media (Jan. 2016). http://wi.mobilities.ca/post-hitchbot-ism/.
[105] Aleksandar Zivanovic. 2005. The development of a cybernetic sculptor: Edward Ihnatowicz and the Senster. In Proceedings of the 5th Conference on Creativity & Cognition. 102–108.
[106] Jakub Złotowski, Diane Proudfoot, Kumar Yogeeswaran, and Christoph Bartneck. 2015. Anthropomorphism: Opportunities and challenges in human–robot interaction. International Journal of Social Robotics 7, 3 (2015), 347–360.
[107] Jakub A. Złotowski, Astrid Weiss, and Manfred Tscheligi. 2012. Navigating in public space: Participants’ evaluation of a robot’s approach behavior. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. 283–284.

Published In

ACM Transactions on Human-Robot Interaction, Volume 12, Issue 1 (March 2023). 454 pages.
EISSN: 2573-9522
DOI: 10.1145/3572831
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 15 February 2023
Online AM (Accepted Manuscript): 19 September 2022
Accepted: 19 August 2022
Revised: 17 April 2022
Received: 06 April 2021
Published in THRI Volume 12, Issue 1

Author Tags

  1. Human-robot interaction
  2. human-robot relationships
  3. bonding
  4. in-the-wild study
  5. qualitative study
  6. common locus
  7. abstract robots

Qualifiers

  • Research-article
  • Refereed
