1 Introduction
Robots are entering human-inhabited environments in growing numbers. However, we lack robust theories on how people make sense of these new robotic co-inhabitants and collaborators [
84,
89]. Jung and Hinds note that research on human-robot interaction has been ‘dominated by laboratory studies, largely examining a single human interacting with a single robot’ - causing a disconnect between these studies and the socially complex environments in which robots are intended to be - and increasingly are - placed [
47]. Several authors have therefore argued for supplementing controlled laboratory research with field studies on human-robot interaction in the real world [
7,
84,
91].
Recently, several such studies have been conducted to increase our understanding of human-robot interactions ‘in the wild’, for example in care homes [
32], school classrooms [
2], and in homes with children [
69] and the elderly [
41]. The humanoid robot Kaspar spent over a year in a special nursery for autistic children [
91]. Furthermore, robot field studies have examined how humans in public spaces react to robots that approach them [
107], listen to them [
74], greet them in hotel lobbies [
73], or give tours in museums [
46].
Levillain and Zibetti claim that strong realism in either human- or animal-like appearance, or autonomous movement and interactive behavior, allows a robot to reach a ‘social threshold’, where humans experience its presence as that of another social agent and are disposed to socially interact with the machine [
58]. While there is ample laboratory-based evidence that appearance and behavior are important factors in establishing a robot as a social agent, there is a growing body of ‘in-the-wild’ cases in which humans show behavior indicating strong social engagement towards robots that possess neither a life-like appearance nor life-like movement or behavior [
1,
10,
11,
31,
40,
61,
102]. How can we explain these cases?
One common element uniting these cases is that such robots typically reside in close proximity to humans, in a shared space, often for prolonged periods of time. We hypothesize that living or working with a robot for an extended period of time can facilitate the experience of the robot as a social agent. Sharing space and time leads to shared experiences, which are further reinforced by increased attribution and projection of lifelike qualities such as identity, emotions, desires, and intentions. This ultimately leads to increased human-robot bonding, even if the robot is not lifelike in appearance or behavior. We call this principle the common locus: the sharing of time, space, and experiences with a robot that causes the human to bond with it.
As an initial playful, qualitative exploration of this under-explored principle, we designed and deployed a series of original robotic artifacts called BlockBots. The BlockBots are small, abstract, cube-shaped robots with minimal human-like appearance and behavior, designed to maximize a common locus with humans (Figure
1). Our methodology to generate interaction with the BlockBot is inspired by hitchBOT: a robot that hitchhiked across Canada, the U.S., and several European countries [
87]. However, we have taken an even more open-ended approach to study how humans interact and bond with these robots in the wild, by presenting BlockBots as autonomous creatures that want to meet people and travel the world, and letting them ‘couchsurf’ from owner to owner. The goal of this study is not to present generalizable knowledge on human-robot interaction or substantive proof for the common locus, but to identify a potentially relevant variable through a novel design methodology which can be further explored and scrutinized in future research.
Since the concept of common locus emphasizes the experience of a shared life within a human-inhabited environment, we consider it appropriate to use this open-ended, ‘in-the-wild’, field-study-based approach. Participants can keep the BlockBot as long as they wish, and then pass it on to whomever they want for the next step of its journey. The BlockBots are thus hosted by an autonomously growing chain of participants, in a natural setting, without set goals or supervision by the experimenters. Participants can send messages to a phone number attached to the back of the box, to let the so-called parent of the bot know how it is doing. These qualitative data have been analyzed to study how participants engage with the robotic artifact and whether it is perceived as a social agent. Despite its minimal design and lack of behavior, we observe that participants are disposed to socially interact with the BlockBot.
Note that this approach differs from hitchBOT’s experimental setup, which was intentionally less open-ended: hitchBOT was given a target destination and an activity bucket list, and information on its travels was shared on Instagram and Facebook. BlockBot hosts were free to do what they wanted with it, including ignoring it, and in principle did not know they were taking part in an experiment. Also, apart from a smiley face, BlockBots are abstract in appearance and very minimal in their behavior: their face is only ever ‘awake’, or ‘asleep’ while charging.
The remainder of this paper is structured as follows. Section
2 reviews related work on human-robot bonding. In Section
3 we introduce the concept of common locus. Section
4 discusses our methodology, the design and deployment of the BlockBot. We then present the results of our field experiment (Section
5), followed by a discussion of the results, known limitations of our study, alternative explanations for the results, and suggestions for potential future research directions (Section
6). Section
7 concludes the paper.
2 Background
In this section, we first reflect on the definition of a robot, and then discuss anthropomorphism, human-robot bonding and the mechanisms of appearance and behavior that allow a robot to pass the social threshold.
2.1 What is a Robot?
There is no universally agreed-upon definition of a robot. Most often, a robot is defined by its technical capabilities: its capacity to autonomously sense and act in the physical world and achieve goals [
64]. Most such definitions can be argued to face demarcation problems. For example, the definition above includes machines that are rarely referred to as robots, such as vending machines, and excludes remote-controlled vehicles such as drones or bomb disposal robots, since their tele-operated nature makes them not ‘autonomous’.
While the definition of what makes a robot is not the main topic of this paper, we would like to argue that, in the context of human-robot interaction, it is beneficial to think about robots not along mechanical and ontological lines, but in terms of how they appear (in the broadest sense of the word) to humans, as Coeckelbergh proposes [
16], or whether and what we project on them, in the spirit of Dennett’s intentional stance [
23]. Don Ihde has also argued that our relation to robots can be understood as an alterity relation, as relating to technology as a (quasi-) other [
44]. With respect to media in general, Nass and Reeves state that people tend to interact with media as they would with other persons [
79].
Therefore, when considering a definition of robots, it is perhaps more useful to distinguish between objects, machines, and robots not on the basis of the physical properties of the object or the mechanical nature of the interaction, but on the basis of the relation humans have with the robotic technology in question. One could distinguish between robots, machines, or artifacts that are ‘subjectified’ - objects onto which humans project features such as identity, state of mind, emotions, and motivations - and objects that are not: machines that appear to people as ‘subjects’ versus machines that appear to people as ‘objects’.
This distinction is not unwarranted. Nass and colleagues already showed in 1994 that humans treat computers as social actors [
71]. It seems that robots inhabit a new ambiguous ontological category, separate from machines in general. Damiano and Dumouchel note that people across age groups perceive robots as ‘ambiguous objects’ in relation to traditional ontological categories [
19]. Robots seem to inhabit a place on a spectrum between alive and not alive, sentient and not sentient, intelligent and not intelligent [
48]. Defining some robots as ‘subjectified’ artifacts is therefore arguably more in line with their perceived ambiguous character.
This means that there could be many subjectified objects; let us call them artificial creatures. Most robots, specifically the ones that we interact with or would want to develop a relationship with, such as social robots, could be seen as artificial creatures. This also means that research on artificial creatures is relevant for human-robot interaction research, as many robots are examples of artificial creatures, at least to some degree. There are also artificial creatures, say a teddy bear, that do not fit the classical definition of a robot. Likewise, there could be robots by the classical definition that are not subjectified objects. For example, a milking robot or surgical robot could be a mere tool and object to one person, while it may be something different to a farmer or a surgeon. This is not problematic because, as discussed, we do not define the artificial creature in isolation, but rather in a relationship to a human. Note that there could also be objects that we develop an attachment to, but that we do not subjectify; an example might be a watch inherited from one’s grandfather. These are not the types of objects that fall within the scope of our study.
For the purposes of this paper, in order to sidestep the debate on defining what a robot is, we understand the common definition of a robot as a mechanical artifact or system in the physical world that possesses a certain level of autonomy, agency, and sensory capacity. Hence, we have refrained from calling BlockBot a robot and have instead called it a robotic artifact. The latter term is meant to convey that the object has certain qualities that could evoke the connotation of a robot, but might not qualify as a robot according to classical definitions [
64]. Given our focus on artificial creatures, one can imagine it will be interesting to study robots, robotic artifacts, and creatures that display only minimal human appearance and behavior, to get a better grasp of what else could induce this kind of projection in humans. From that perspective, it is less important to us that the artifact looks like a human, behaves like a human, or, as a matter of fact, looks or behaves like a classical robot - we are rather looking for the opposite: an artifact that would still pass the social threshold.
2.2 Anthropomorphism
People tend to project human-like qualities or traits such as emotion, agency, or intention onto artifacts. This tendency is known as anthropomorphism [
58,
106]. Anthropomorphism has largely been viewed as a cognitive bias in other fields of study [
26,
58]. However, the concept generally fulfills a positive and central role in the fields of human-robot interaction and social robotics and is theorized to lead to a natural and intuitive interaction between humans and robots [
19,
27,
35].
In cognitive science and philosophy of mind, thinkers such as Dennett see anthropomorphism not as a human bug, but as a feature. He argues that taking an intentional stance, i.e., projecting beliefs, desires, and intentions onto the other, is an ecologically useful and computationally economical way to understand and predict the behavior of ‘the other’ [
23]. This does not mean that anthropomorphism is necessarily desirable. For example, Coeckelbergh has outlined several normative responses to anthropomorphism [
17]. For instance, it matters whether the ‘other’, say a robot, was designed to benefit the humans it is interacting with in the first place, or rather its designers.
In line with this tendency to project human-like qualities, people have been observed to show rich emotional and social behavior, such as empathy, when interacting with robots. A study using
electroencephalography (EEG) found that humans are able to empathize with robot pain [
90]. Another study, using fMRI, showed that violent interactions towards humans and towards robots elicited similar neural activity, in contrast to interactions with an inanimate object, indicating that humans and robots evoke similar emotional reactions [
81]. Similarly, other studies have shown that people were less likely to turn off an agreeable, intelligent machine that begged for its life than one that was neither agreeable nor intelligent [
4].
There is also ample anecdotal evidence from non-scientific sources of humans showing empathy for mechanical artifacts. In an experiment covered in an episode of the radio show Radiolab, participants were more uncomfortable holding a Furby upside down - causing it to cry - than a Barbie doll [
62]. In a workshop, participants showed strong reservation about hitting a dinosaur-like robotic toy called Pleo that they had just spent an hour playing with [
28]. When the social domestic robot Jibo was discontinued, media outlets reported on people mourning its ‘passing’, with some parents having to explain to their children that ‘Jibo was not going to be around anymore’ [
9]. After Boston Dynamics showcased the balancing capabilities of their quadruped robot Spot - which showed life-like movement - by kicking it in an online video, there was online outcry over the supposed cruelty displayed [
75].
It is commonly claimed that strong realism in either human-like appearance or autonomous movement or behavior “allows a robot to reach the ‘social threshold’, where humans experience its presence as that of another social agent and are disposed to socially interact with the machine.” [
58]. The balance between these factors is hypothesized to be asymmetrical: behavior seems to be a stronger factor for a robot to pass the social threshold than its appearance [
58]. We discuss these categories briefly in the next part of this section.
2.3 Appearance
Appearance-based strategies can be clustered into three categories: abstract, animal-like, and human-like robots. A robot’s abstract appearance does not in itself function as a social cue; instead, such robots often use a behavioral approach to solicit social engagement (for further reading see: [
22]).
There are conflicting studies about the extent to which the human form functions as a positive social cue for human-robot interaction. One study showed that humans react more empathically towards robots with human-like appearances than towards those with non-human-like appearances [
96]. On the other hand, the human form can raise expectations about intelligence, intent, agency, and physical capabilities that contemporary robot technology might not be able to deliver on, which in turn might generate negative feelings [
70]. Regardless, there have been several projects developing robots with a highly realistic human appearance such as the android clone of Hiroshi Ishiguro [
42] or Bina48 developed by Hanson Robotics [
39]. Unsurprisingly, the appeal of sex robots is also largely tied to their human-like appearance [
92]. To map out the concept of human-like appearance, the ABOT database has constructed three distinct dimensions of human-like appearance - body-manipulators, face, and surface - based on a collection of 251 real-life robots [
78]. One can also generalize beyond visual appearance. For example, Schreuter et al. demonstrated that people showed higher levels of conformance to a conversational assistant with a human-like voice than to a text-based assistant [
85].
Another appearance strategy explores animal-like forms. Research suggests that humans have a more positive attitude towards animal-like or toy-like robots than towards human-like robots [
24]. For example, the robot seal PARO stimulates feelings of attachment and engagement [
63] and children formed a very quick attachment to the robot dog AIBO [
99].
2.4 Behavior
Realistic behavior or movement is believed to be a stronger cue for social interaction than appearance [
58]. Even the simplest movement can give the impression that it is carried out with intent, in pursuit of goals, and as the result of some sort of intelligence, and can influence the level of empathy humans feel for a robot [
21]. One study showed that humans project intentions on simple geometric shapes that move around a screen [
43]. Braitenberg argued that even the simplest movement of small robotic creatures in reaction to their environment can lead to the attribution of complex behavior: movement towards a location seems indicative of interest and therefore curiosity, while movement away from a source seems to indicate fear or disgust [
8]. A good example of this is the iRobot Roomba™ - a nondescript disc-shaped vacuum robot that moves around to clean floors while avoiding obstacles. Studies in long-term human-robot interaction showed that people are eager to personify their vacuum robots with names and to ascribe personality traits to them [
29,
30,
89].
Movement has also been a central strategy for imbuing artifacts with robotic life. Adding a reactive potential to non-anthropomorphic robotic artifacts broadens the range of behavior that can be read into them [
58]. These artifacts are known under several monikers: Objects with Intent [
82], familiar domestic object robots [
12], robjects [
80], the object-based robot design approach [
53], or abstract robotics [
22].
Interestingly, fallibility or helplessness in robots can also facilitate certain attributions. For example, when an AIBO robot dog trips while walking, it is not perceived as malfunctioning but as endearing [
51]. The Tweenbot, a cardboard robot on wheels that could only drive forward, managed to reach the other side of Central Park, New York; its destination was written on a flag it carried [
52]. The robot would repeatedly get stuck, but passers-by would pick it up and point it in the right direction.
Aside from motion, the ability to communicate emotion is also a strong catalyst for social engagements. A robot mimicking facial expressions strongly influences the level of human empathy [
38]. A robot that adapts its mood to that of a human, through the facial and verbal expressions of a robot head, also increases feelings of helpfulness towards that robot [
37]. Robots that themselves behave empathically are perceived as friendlier [
55,
77].
Lastly, the use of spoken or written language has also been a mechanism to establish a social interaction. In a domestic setting, disembodied chat-bots or monolithic home assistants such as Amazon Alexa rely on spoken language as a social cue. Robots that need to communicate frequently and clearly often rely on natural language to establish a social interaction [
88]. In one study, participants were hesitant to turn off a robot when it begged for its life [
4].
Artistic projects have also deployed behavior as a social cue to solicit attributions of curiosity or feelings of empathy (for an extensive collection of these kinds of robots see [
97]). The interactive robotic sculpture Senster created by artist Edward Ihnatowicz, moved its ‘head’ towards sources of relative loud sound and low levels of movement which caused exhibition guests to attribute curiosity and a more complex intelligence to Senster than actually programmed [
105]. The Beggar Bot used human language to successfully entice people to give money to it - even in the presence of actual human beggars [
93].
Strategies for designing robot sociability can thus be characterized along a spectrum of high to low appearance cues and high to low behavioral cues. In the next section, we will introduce a novel, additional variable: the common locus.
3 Common Locus
There is a growing number of examples in which humans form a strong bond with - or at least engage socially with - robots that are explicitly machine-like in their appearance, express limited to no behavior, and have a primarily non-social function. These robots have nonetheless emerged as social agents. For example, a study from the University of Washington on the interaction between US military Explosive Ordnance Disposal personnel and their bomb disposal robots found that soldiers make rich psychological attributions to these robots and form strong bonds [
10]. Interestingly, these robots are not capable of autonomous movement and look distinctly machine-like: often not more than four wheels with a robotic arm and a camera. Nonetheless, soldiers attribute a gender to the robots, give them nicknames, and show behavior indicative of grief and sadness when one is lost in action [
10]. Similarly, a study researching what language was used on social media about the discontinuation of the NASA Mars rover Opportunity found that people “verbally mourn robots similar to living things” [
11]. This is surprising since Opportunity is a remote-controlled vehicle that was not designed to evoke social behavior.
We also find several examples of humans bonding with robots that were not designed to solicit social engagement in non-scientific sources such as media articles. A MARCBot, a type of bomb disposal robot, nicknamed Boomer, was reportedly given a burial and gun salute after being destroyed on duty [
31]. When the NASA Mars Rover Opportunity was discontinued, the crew sent it a farewell song and the press statement reportedly ‘amounted to a funeral’ [
40]. The
Canadian Broadcasting Corporation (CBC) reportedly threw a retirement party for its five bulky mail delivery robot colleagues, which employees had named and ascribed personalities to. During the retirement party, employees discussed shared experiences with and memories of the robots, such as one robot blocking the door while a presenter was late for a live broadcast [
102].
The aforementioned robots are explicitly machine-like in their appearance and express limited to no behavior. However, we observe that such robots are situated in highly social environments - places of work, the home, and war zones - which allow for frequent human-robot interaction. This indicates that a social setting can be an important factor for non-social robots to emerge as social agents, which is in line with research showing that people operating closely to robots ‘frequently use anthropomorphic language’ about those robots [
13].
This paper theorizes that living or working in close proximity or cooperation with a robot for an extended period of time can facilitate the experience of that robot as a social agent, even if the robot offers few appearance-based or behavioral social cues. We propose an additional variable for a robot to pass the social threshold and be perceived as a social agent: the concept of a common locus.
The common locus is a term introduced by Nikolaos Mavridis in an interview with Wired journalist Emmet Cole in the context of long-term human-robot interaction [
65]. In this interview, Mavridis hypothesizes that the “concept of ‘Sharing’, and more specifically building and maintaining a metaphorical ‘Common Locus’ ... forms the backbone of a meaningful and sustainable Human-Robot relation” [
65]. Mavridis argues that what unites two friends is all the shared elements between them - a Common Locus - which grows over time. According to Mavridis, this Common Locus is made up of: “shared memories (what they have lived together, and what they have experienced in common), their shared acquaintances and friends (given that we don’t live in isolation; but are deeply embedded within our social network), their shared interests and tastes; including also more fundamental shared elements, such as a shared language of communication” [
65]. This idea functioned as the basis for Mavridis and colleagues to develop a robot called Sarah the FaceBot that would exploit online published information, such as from Facebook, to create a pool of shared memories and shared friends [
66,
67]. During interactions, Sarah the FaceBot would, for example, refer to past events that the human and the robot were both present at, or mention that it had seen a common friend the other day.
While the term ‘common locus’ in this context appears to be limited to this interview, we believe that the concept aptly describes a crucial factor in human-robot bonding. This study provides a more elaborate definition of the common locus. Specifically, we stress the importance of spatial proximity and shared time, both of which allow frequent interaction to maintain and strengthen the common locus. We hypothesize four potential sub-components that would facilitate a common locus:
(1)
An explicit or implicit decision to treat a certain entity as a subject, at least to some degree (whether by genuinely believing this, suspending disbelief, or playing along);
(2)
A shared space or close proximity for the human-robot interaction to happen;
(3)
Shared time, or a repeated opportunity for engagement with the robot over time; and
(4)
The perception of shared life experiences, including, for example, specific encounters, events, activities, and conversations that a robot and a human are both present for; goals that a human and a robot both work towards; and interactions with similar social contacts.
We hypothesize that these components facilitate a common locus between a human and a robot and allow people to have the impression that they ‘share’ memories, social contacts, and experiences with the robot, which could in turn lead to the projection of life-like attributes.
The concept of common locus could provide a novel, additional dimension in establishing, understanding and describing human-robot relations, which can be especially relevant in environments that depend on close cooperation with robots [
18,
103]. If a social interaction or bond with a robot is a desirable design feature, awareness that the (prolonged) proximity of a robot in the social life of a human can facilitate social engagement might in turn inform the design and usage of such a robot. Of course, strong human-robot engagement could also be exploited for unwanted goals or cause unintended consequences, such as a scenario in which a human puts themselves in harm’s way to ‘save’ a robot. In these types of situations, the common locus implies that the presence of robots in the proximity of people should be curbed, since merely removing anthropomorphic qualities could be insufficient to prevent human-robot engagement or bonding.
For completeness, we do not want to imply that only appearance, behavior, and common locus lead to subjectification; that would be an overly simplified view of the complexity of relationships between humans and the outside world, and of how these can develop. Nor do we want to imply that common locus is a necessary condition for subjectification to arise. For example, subjectification can also happen instantly, as is the case in many of the best cybernetic works of art. In such a context the experience had better be instant, as the long-term conditions for a common locus are infeasible. If anything, we want to inspire researchers to look further than appearance and behavior, which are already abundantly studied, and to go beyond mirroring humans in general. There will be more factors at play, including characteristics of the human rather than the robot, and these factors will also interact.
4 Methodology
In this section, we discuss our research methodology and questions, and the design and deployment of the BlockBot. As an initial exploration of the concept of common locus, we aim to design a robotic artifact that can facilitate a shared space, time, and perceived experience between it and a person, while displaying low life-like realism in appearance and behavior (see Figure
2), and that can be deployed independently outside the laboratory. The result of this process is the robotic artifact called BlockBot.
4.1 Research Method: In-the-Wild Field Study
Since the concept of the common locus puts emphasis on the experience of a shared life within a human-inhabited environment, we consider it appropriate to use an ‘in-the-wild’ field study approach [
7,
84,
91]. Jung and Hinds note that research on human-robot interaction has been ‘dominated by laboratory studies, largely examining a single human interacting with a single robot’ [
47]. This has led to a large body of scientific research on the technical mechanisms that affect human-robot interactions. However, there is a disconnect between these studies and the socially complex environments in which robots are intended to be - and increasingly are - placed [
47]. Laboratory studies do not ‘provide insights into the aspects of human-robot interaction that emerge in the less structured real-world social settings in which they are meant to function’ [
84], and hence may lack ecological validity. Controlled laboratory studies offer helpful insights, but would benefit from being supplemented with studies on human-robot interaction outside the laboratory in the proverbial wild [
7,
84,
91].
Robot field studies have, for example, focused on how humans react to robots in public spaces that approach them [
107], listen to them [
74], greet them in hotel lobbies [
73], or give tours in museums [
46]. The humanoid robot Kaspar spent over a year in a special nursery for children with autism [
91]. Creative robotics projects such as hitchBOT [
87] or the BeggarBot [
93] have also employed an ‘in-the-field’ approach to engage people with relevant human-robot interaction topics - such as trust between humans and robots - in a natural setting. Giusti and Marti note that social robotics is ‘an extraordinary opportunity to design technologies with open-ended possibilities for interaction and engagement with humans’ [
34]. Citing William Gaver, they argue that systems that are designed to be open-ended can lead to ‘an intrinsically motivated and personally defined form of engagement’, instead of an ‘experience to be passively consumed’ [
34]. In an interview, the creators of hitchBOT mention that the overall results of their experiment led them to theorize ‘that robotic technologies that afford creative shaping by their users are more likely to become socially integrated’ [
104].
Obviously, there are also inherent limitations to such an open, in-the-wild setup. It is harder to carefully control conditions and gather experimental data. We do not position our study as a quantitative empirical study proving necessary conditions. It is, at best, a qualitative empirical study, aimed less at providing answers and more at providing proof of concept and possibility, and at stimulating questions and directions for theory formation and future research, both in terms of the research question and topic and in terms of the in-the-wild method used. Also, as described below, we have developed multiple iterations of both artifact and method, so design research can be seen as a secondary method. In the future, studies like these could be extended with ethnographic, ethological, or ecological approaches, but this was outside the scope of this paper.
Following these proposals, we designed BlockBot to be open-ended and ambiguous in terms of functionality, which allows participants to actively and creatively shape its social role. Placing the BlockBot in domestic settings outside of our supervision or control aims to ensure a naturalistic engagement that is as close as possible to how people would interact with other robotic technologies in a domestic setting. We deliberately chose not to give BlockBot a social media account or similar insight for participants into its social history - where it had been, with whom, and what it had done - to ensure that each interaction with the BlockBot was as natural as possible and not informed by previous encounters. Hence, we had a more controlled environment than hitchBOT, for example [
87]. Obviously, there are trade-offs that come with this approach, and we expand on our methodological shortcomings in the discussion section (Section
6).
BlockBots are presented to participants as autonomous creatures that want to meet people and travel the world, inspired by projects such as hitchBOT [
87]. Participants can keep the BlockBot as long as they wish, and then pass it on to whomever they want for the next step of its journey. BlockBots are thus hosted by an autonomously growing chain of participants, in a natural setting, without set goals or supervision.
Qualitative data are gathered through WhatsApp communication. Participants can send messages about their thoughts, feelings, and activities with the BlockBot, as well as photos, to a phone number posted on the back of the box. These data are analyzed in terms of how participants engage with the robotic artifact and whether they make identity and mind attributions to the BlockBot. We consider WhatsApp an ideal medium for this purpose. The association with ‘what’s up’ is not accidental: people generally use it to let others know how they are doing. The threshold for using it is very low, which is key, and given its informal nature we aim to get responses that are as close as possible to the experience itself. It is also a medium in which people can express themselves visually through photos, not just text, which makes it very useful for ‘ecological’ self-observations.
4.2 Research Questions
Our high-level research questions are the following: Is it possible for people to develop a relationship with a robotic artifact, in the sense that they subjectify the artifact, even if it has low levels of human-like appearance and behavior? And could common locus, i.e., the sharing of space and time, leading to perceived joint experiences and activities, be a factor in developing this bond?
Our theoretical foundations have been illustrated in the previous sections, but to single out one of the most salient ones let us return to Dennett’s intentional stance. In his book ‘The Intentional Stance’ he states: ‘Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.’ [
23]
This quote helps to organize the more detailed questions we would like answered. At the level of research method feasibility, we want to know whether people are willing to host the BlockBots, engage with them, provide sufficiently rich and frequent feedback, and pass them on. To get an idea of whether people would develop a relationship, we want to know what kinds of shared activities and experiences they engage in, and with what frequency. To get a sense of whether the relationship involves a level of subjectification of the bot, we can follow Dennett’s quote. First, do these bots pass the initial threshold in terms of humans explicitly deciding to treat the bot as a subject? As a proxy for this, we look for indications that some form of identity is attributed to the bot. Then, to understand the relationship better, we want to understand the kinds of attributions people make that could indicate goals, beliefs, desires, emotions, intentions, social needs, and so on. Finally, to understand common locus better, we want to zoom in on these shared experiences and perhaps differentiate between participants with high and low degrees of common locus.
4.3 Preliminary Study
Leading up to and during the design process of the BlockBot, we conducted a preliminary, open-ended tinkering experiment [
54], called TouristBox, to gather initial insights for BlockBot’s design. The TouristBoxes were low-cost, impromptu cardboard creatures that were left out in public on the Dutch island of Schiermonnikoog. Each TouristBox was made of a cardboard box, had cardboard strips as limbs, and a smiley drawn with a marker as a face. Most importantly, the TouristBox held a sign that said (original in Dutch): ‘
I want to see the island. Will you take me along? Let my parents know how I am doing at: [phone number]’.
The TouristBox elicited some very interesting engagement and thereby provided a valuable initial indication of what actions and attributions we could expect from participants. People showed motivation to save it from the rain, ‘feed’ it, touch it, and patch it up. They took it along on their holidays, took photos of it at several locations, engaged in several activities with it, and wrote about its well-being, state of mind, and location. The inanimate, anthropomorphic cardboard box managed to ‘see’ the island, meet people, and survive out in the wild for at least two weeks. Participants mainly seemed to shape its social role as that of a fellow traveler that joined their holiday for a while before letting the box go its own way again.
In contrast to the TouristBox, we decided not to leave the BlockBot in a public space, but rather to have it passed on between participants. The reason for this was economic: since the BlockBot was considerably more costly to produce, in time as well as money, leaving the robotic artifact in public space could result in it being destroyed by people or the elements before it could generate any insights. Consequently, participants needed to adopt an active role in finding the next participant, which potentially raised the bar for audience participation. The expected trade-off was that the BlockBot would ‘travel’ more slowly between participants, but was in less danger of being destroyed or lost between two participants than the TouristBox.
4.4 BlockBot Design
There are several considerations that went into the design of the BlockBot (see Figures
1 and
2). First, the design of the BlockBot aims to minimize - though not completely remove - anthropomorphic or zoomorphic appearance and life-like behavior, while evoking robot-like connotations. The cubical body, the charging cable, and the sleek monotonous color add to the BlockBot’s artificial look, comparable to a monolithic home assistant. We decided to keep a simple face as part of the design of the BlockBot, an element that provides both a minimal sense of realism in appearance and behavior and triggers an initial decision of the host to treat the item as a subjectified object. The BlockBot cannot move, but it can change its face to a sleeping face while charging, which gives it a minimal notion of behavior. We do not add any emotions aside from the smile, to curb any further realism in the BlockBot’s appearance and behavior. In Section
6, we discuss the possibility that the face impacted the results of this study and suggest displaying different shapes - or nothing at all - instead of the BlockBot’s face in future studies.
Second, the BlockBot needs to have a size that makes it convenient for participants to keep it close to them, as well as easy to pass on from one participant to the next. The exterior of the BlockBot is a cube with edges of 12 cm (4.7 inch). This size leaves enough room for the display and the Arduino Uno micro-controller board inside the bot and is small enough for people to transport easily. A small plastic plaque has been attached to the back - so that it cannot get separated from the BlockBot - which reads: ‘I want to make friends and see the world. Can I stay at your place for a bit? Please hand me to a friend afterwards! You can charge me using my tail. Let my parents know how I am doing and where I am at: [phone number]’. We chose to write the message in English to allow a wider demographic to interact with it.
Finally, the BlockBot needs to be durable, since it is deployed unsupervised into human-inhabited environments. While the BlockBot prototype contained a rechargeable battery, it has been removed in the second-generation BlockBots to prevent any potential lithium-battery-induced fire hazards; the USB charging cable now leads directly into the BlockBot. The BlockBot is made from medium-density fiberboard (MDF) that has been treated with a water-resistant polish. The display is a Waveshare 2.7 inch e-ink display, operated by an Arduino Uno micro-controller board. A RobotDyn RTC (real-time clock) with a three-year battery life was added to keep track of time. When the BlockBot is connected to power, it reads the independently powered clock to determine which face to display: after 6:30 it wakes up, and after 22:00 it goes to sleep. When the BlockBot is not attached to power, the e-paper display retains the image. This also allows new code to be uploaded to the BlockBots while assembled, making their capacity as a research platform more versatile. On the downside, the BlockBot will not switch faces unless it is plugged into power; however, the narrative that it should be charged at night potentially solves this problem.
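To make the wake/sleep logic concrete, the following is a minimal sketch of the firmware behavior described above, not the deployed code: it assumes a DS3231-compatible clock accessed through Adafruit’s RTClib, and drawAwakeFace and drawSleepFace are hypothetical stand-ins for calls to the Waveshare e-paper driver.

```cpp
#include <RTClib.h>  // assumption: a DS3231-compatible RTC via Adafruit's RTClib

RTC_DS3231 rtc;

// Hypothetical helpers standing in for the Waveshare e-paper driver calls.
void drawAwakeFace() { /* render the smiley on the e-ink display */ }
void drawSleepFace() { /* render the sleeping face on the e-ink display */ }

// Awake between 06:30 and 22:00, asleep otherwise.
bool isAwake(const DateTime& now) {
  int minutes = now.hour() * 60 + now.minute();
  return minutes >= 6 * 60 + 30 && minutes < 22 * 60;
}

void setup() {
  rtc.begin();
  // The e-paper retains its last image without power, so the face only
  // needs to be (re)drawn while the BlockBot is plugged in.
  if (isAwake(rtc.now())) drawAwakeFace(); else drawSleepFace();
}

void loop() {
  // While powered, poll the independently powered clock and redraw the
  // face only when the awake/asleep state changes.
  static bool awake = isAwake(rtc.now());
  bool nowAwake = isAwake(rtc.now());
  if (nowAwake != awake) {
    awake = nowAwake;
    if (awake) drawAwakeFace(); else drawSleepFace();
  }
  delay(60000);  // checking once a minute is ample for a twice-daily change
}
```

Polling the clock rather than using alarms or interrupts keeps such a sketch trivial, and redrawing only on a state change avoids needless display refreshes, since e-paper consumes power only when refreshing.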
Sourcing components for the BlockBot was straightforward: all technical components were acquired from electronics stores. The MDF exterior was constructed by two woodworkers; however, no extensive woodworking skills are necessary to construct the BlockBot exterior. Researchers aiming to recreate the BlockBot for future studies would be advised to leave one side of the BlockBot - in our case the back - dismountable in order to facilitate troubleshooting.
4.5 Deployment
Three BlockBots were deployed in Amsterdam, The Netherlands; to break this pattern, a fourth was released in Nijmegen, close to the border with Germany. The initial seed participants were selected from the authors’ own social circles and differed in the social make-up of their domestic settings: a young family, a couple living together, a student, and a self-employed person with roommates. All initial seed participants were between 22 and 32 years old. The demographic of participants was outside of the authors’ control after the initial seed participants had passed the BlockBot on to a new participant of their choice.
To ensure that the initial participants were not biased in their engagements with the BlockBot, they were given only limited details about the goal of the study. The initial participants were aware that the BlockBots were part of a study on human-robot interaction in human-inhabited environments, but were not instructed on the specific aim of this study, such as its research question or hypothesis. Neither were participants specifically encouraged to make attributions of identity or mind. The initial participants were simply instructed to host the BlockBot for a period of their own choosing, encouraged to send an update on the BlockBot via the number on the back of the box in whatever form they chose, and asked to pass the BlockBot to another host who they felt would pass it on themselves as well. To allay concerns about the BlockBot infringing on participants’ privacy, the initial participants were informed to some degree about the BlockBot’s inner workings, such as the fact that it did not have any sensors and did not store any data. Participants were also told they did not have to worry about the BlockBot’s battery running out, but were encouraged to charge it occasionally. It was emphasized that the initial participants could do whatever they liked with the BlockBot, short of destroying it. Concerning passing on the BlockBot, the initial participants were requested not to mention that the BlockBot was part of a study or project, but to say that it had already been traveling for a while before reaching them. This would ensure - as far as possible - that the interactions of the next series of participants were as natural and as uninformed about the BlockBot as possible. We observe no apparent differences between the data gathered from the initial seed participants and from subsequent participants that would imply that the initial participants were biased in their engagements.
5 Results
In this section, we present the results of our study. To reiterate, the goal of our research is to explore in an open-ended manner whether it is possible for people to develop a relationship with a robotic artifact, in the sense that they subjectify the artifact, even if it has low levels of human-like appearance and behavior, and to explore whether common locus, i.e., the sharing of space and time leading to perceived joint experiences and activities, could be a factor in developing this bond.
The open-ended nature of this study allows for indefinite gathering of qualitative data, since there is no clear end goal to a BlockBot’s journey. The first BlockBot prototype was deployed for six weeks, from July 13, 2020 until August 26, 2020. The second generation of three BlockBots was deployed in the field on August 18, 2020, two of which are still deployed. For practical purposes, we have included results up to October 1, 2020 in this paper. See Table
1 for the total numbers of participants, text messages, and photos received.
In the end, each of the four BlockBots had a distinct ‘journey’. One BlockBot did not move from its starting location at all, one roamed in the vicinity of Amsterdam, another traveled around some of the major cities before ending up in the north of the Netherlands, while the last one traveled through Belgium and is currently back in the Netherlands. Based on participant communication referring to failed plans of traveling to countries such as France and Norway, we believe that the coronavirus pandemic curbed the BlockBots’ potential trajectories. One BlockBot was even stuck in quarantine for ten days due to one of its host’s roommates testing positive for COVID-19.
The results show that most participants are disposed to communicate about the BlockBot (see Figures
3–
5), yet to varying degrees. While some participants send only a few messages and photos or a series of minute-long videos, others share detailed, page-long reports on how they experience the BlockBot’s presence and how this changes over time. We observe a correlation between the volume of messages a participant sends and the amount of attribution we observe in their communication. Since participants are not instructed to communicate about their engagement with the BlockBot in any particular manner, we suspect this difference is due to individual differences between participants in the intensity with which they respond to the BlockBot’s presence in their lives. One observation we can make is that participants tend to communicate with decreasing frequency and quantity over time (see Figure
6). Often, participants send one or two messages with photos, after which either the frequency and quantity drop or they pass the BlockBot along. Even for participants who are highly frequent communicators, we note a decline in the frequency and quantity of communication over time. We discuss this potential novelty effect further in the Discussion section.
Through analysis of the received qualitative data, which consist of messages, photos, and videos, we gain a sense of how participants engage with the BlockBot. We discuss our data across three aspects. First, to what extent does the BlockBot manage to establish a common locus with its hosts? In other words, is the BlockBot actually in proximity to the people that host it, do participants spend time with it, and do participants take the BlockBot along for shared experiences? Second, what do people report about the BlockBot and their experiences with it? We discuss this along two domains identified in the available data: identity attributions, which could be associated with the decision to treat the object as a subject, and mind attributions, and the interactions between the two. Third, we explore the relationship between common locus and these attributions and activities.
5.1 Common Locus
The BlockBot appears to successfully establish a common locus. The robotic artifact is present in the same spaces as participants over an extended period of time, and is present during events inside and outside the home. At times, participants take the BlockBot with them, i.e., they actively maintain the common locus. Participants keep the bot in their homes, most often in the living room and sometimes the bedroom. From the received messages and pictures, we observe that participants move the BlockBot around the house and take it, for example, to the balcony to sit in the sun with them, to a study to be present while they study, or to bed with them at night (see Figure
7). Others seem to leave the BlockBot in one spot in their living room.
Participants also often physically move the BlockBot along with them to maintain a common locus outside of the domestic setting. For example, one participant received the BlockBot in one city, where she hosted it for a couple of days, and later moved it along with herself to another city. After several weeks, she then moved it back to the original city, where she passed it on. Another participant took it on a trip to a farm in Belgium, back via several Dutch cities, and on several other trips. To name a few other examples, the BlockBot joined participants at a construction job, a university robotics laboratory, a barbecue in the park, and on a walk in the woods. Taking the BlockBot outside and on extra-domestic activities is reported by over half of all participants (see Figure
7). These examples show that the BlockBot solicits in some participants an active engagement in maintaining a common locus, by keeping it in close proximity inside and outside the domestic setting, as well as a tendency to turn experiences or activities into joint experiences or activities.
5.2 Identity Attribution
What do people attribute to the BlockBot? Almost all participants attribute some sense of identity to the BlockBot. The received messages show that the vast majority of participants refer to the BlockBot with a gendered pronoun (he/she), with a majority attributing a male gender (see Figure
8 for attributions at the message level). So far, participants write almost exclusively about the BlockBot in the third person, referring to it as ‘he’, ‘she’, or ‘the (ro)bot’. Only one participant refers to it as ‘it’, and one other communicates in the first person and hence refers to the BlockBot as ‘I’.
The vast majority of participants also give the BlockBot a name. Some names clearly refer to an assigned gender, such as Brenda and Jules, while others reflect more the robotic nature of the BlockBot, for example Boxxie, Robbie, and Botje (Dutch for ‘little Bot’). One participant communicated that she had tried several names before landing on a male name and questioned why she felt that a male name in particular fitted the box. She reported asking people in her social circle to help her name the BlockBot but stuck with her initial choice. Interestingly, the data show that names are not necessarily retained between participants. For example, after Jules came Rust, and after Brenda came Boxxie. This could indicate that participants mainly shape the identity of the BlockBot themselves.
Overall, at the participant rather than the message level, 71% of participants assign a gender or communicate in the first person, 71% assign a name or communicate in the first person, and 86% assign at least a gender or a name, or communicate in the first person.
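As a purely illustrative sketch of how such message-level codes roll up to these participant-level percentages, consider the following; the data, field names, and coding scheme are hypothetical, and this is not the authors’ actual analysis pipeline.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// One coded WhatsApp message; the flags mirror the identity codes
// discussed above (name, gendered pronoun, first-person voice).
struct CodedMessage {
  std::string participant;
  bool usesName;     // refers to the BlockBot by a given name
  bool usesGender;   // uses 'he' or 'she' for the BlockBot
  bool firstPerson;  // written as if by the BlockBot itself
};

int main() {
  std::vector<CodedMessage> messages = {
      {"P1", true, true, false},  // hypothetical coded corpus
      {"P2", false, false, true},
      {"P3", false, false, false},
  };

  // Roll message-level codes up to participant level: a participant counts
  // as assigning identity if any of their messages carries such a code.
  std::map<std::string, bool> assignsIdentity;
  for (const auto& m : messages) {
    bool& flag = assignsIdentity[m.participant];
    flag = flag || m.usesName || m.usesGender || m.firstPerson;
  }

  int positive = 0;
  for (const auto& entry : assignsIdentity) positive += entry.second;
  std::cout << 100.0 * positive / assignsIdentity.size()
            << "% of participants assign some form of identity\n";
}
```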
5.3 Mind Attribution
In this section, we discuss the mind attributions that participants make about the BlockBot - agency, emotions, friendships, preferences, and intentions - in order of their frequency in the data at the message level (see Figure
8).
Participants make several different attributions that refer to the BlockBot’s ability to act. An often-made attribution is the ability to see: for example, the BlockBot ‘likes to observe the room’ and ‘has seen’ several cities and sites. Another participant referred to the BlockBot as ‘gaining experience’ and ‘growing up’, implying a certain ability to learn. The BlockBot was also taken to a gym, where it was described as ‘being sporty’, ‘running contests and squash competitions’, and ‘learning new sports’. Generally, participants do not mention that they take the BlockBot to places or put it in a certain location. Instead, they use more action-oriented language, as if the BlockBot were able to move independently: for example, ‘Jules has seen three different cities’ or ‘He has been to the forest with us and we have played a game together’, rather than being taken there.
It is interesting to note that, when attributed emotions, the BlockBot is ascribed only positive ones in this experiment: happiness and hope. For example, one participant wrote that the BlockBot ‘was happy at the BBQ because it was fun’. Another wrote that the BlockBot ‘hopes’ to leave town soon. The latter could be marked as an attribution of both emotion (hope) and desire or intention (leaving town).
Several participants attribute a quality of friendship between the BlockBot and animals, toys, or statues with an animal likeness: a cat, a stuffed animal, a statue, and a pig. One participant referred to a BlockBot and a cat as ‘buddies’, another to the BlockBot and a pig as ‘friends’ (see also Figure
4). The BlockBot has also been attributed certain preferences, such as sitting in the sun on the balcony, sitting with the plants, or liking to observe people. The intentions attributed to the BlockBot by participants mainly seem to align with its deployment narrative: it wanted to see the world. We received several messages aiming to communicate the BlockBot’s wish to travel on. The above attributions could have been influenced by several other factors, such as the BlockBot’s face and its deployment narrative. We discuss the potential effect of such factors further in Section
6.
5.4 Interactions between Common Locus, Activities, and Identity and Mind Attributions
So far, our analysis has mainly been ‘univariate’: discussing each observed variable separately. In this section, we discuss possible interactions and associations between the variables themselves. Although we are working with quite a low sample size, these interactions could provide speculative yet interesting insights that could be scrutinized further in more extensive follow-up research.
Taking the BlockBot outside could be argued to be an effective way to maintain or build a common locus, since proximity is preserved even outside of the domestic setting (see Figure
9). It will be interesting to see whether people who maintain a common locus by taking the bot outside with them also make richer attributions. This would not prove causation, but even a correlation would be interesting. When we sort our data into two groups - those that take the BlockBot outside and those that do not - several possible correlations can be observed in terms of attribution.
Firstly, almost all of the participants who take the BlockBot outside also take the BlockBot on an activity. This is arguably unsurprising, since most activities take place outside of the domestic setting. Regardless, it shows that most of these participants do not simply put the BlockBot on their balcony or in their garden, but also take it on activities.
Secondly, participants who take the BlockBot outside show higher percentages of identity and mind attribution (see Figure
9). For example, the majority in this category attribute a name (75%) and a gender (62.5%) to the BlockBot, compared to 28.5% and 50%, respectively, in the not-outside group.
While the sample size is low, these two interactions could suggest a relation between actively maintaining a common locus through close proximity outside the domestic setting and projecting mind attributions. This would be in line with our hypothesis, since we observe that more attributions are made by participants who take the BlockBot outside - and thereby preserve a common locus - than by those who do not.
When we sort the data on participants who name the BlockBot - which indicates that a participant wants to build a certain bond or relation with the robotic artifact - an interesting observation emerges: there is an apparent hierarchy to the attributions made to the BlockBot (see Figure
10). We observe that, almost exclusively, identity attributions outnumber mind attributions. Aside from the cases where no attributions are made, if there is an attribution of mind, a participant will have given the BlockBot a name and/or gender. This potentially indicates that an attribution of identity may be a necessary condition or enabler to make further mind attributions.
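This hierarchy can be read as a one-way implication: a mind attribution should only occur together with an identity attribution, but not necessarily the other way around. A minimal sketch of that check, again on hypothetical codings rather than our actual data, could look as follows.

# Hypothetical per-participant codings, for illustration only.
codings = [
    {"identity": True, "mind": True},    # name/gender plus e.g. an emotion
    {"identity": True, "mind": False},   # name/gender only
    {"identity": False, "mind": False},  # no attributions at all
]

# The proposed hierarchy: every mind attribution co-occurs with an identity attribution.
hierarchy_holds = all(c["identity"] or not c["mind"] for c in codings)
print("mind attribution implies identity attribution:", hierarchy_holds)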
In total, we therefore observe three clear correlations. First, if a participant makes an attribution of identity, it is highly likely that this participant will also make mind attributions and take the BlockBot on activities outside. Secondly, if a participant makes one type of mind attribution, it is very likely that they will also make another: attributions beget attributions. Lastly, if a participant takes a BlockBot outside, it is highly likely that they will also make mind attributions.
However, these correlations could potentially be explained by an alternative effect. Some participants might simply be more inclined to behave in an active manner towards the BlockBot. In other words, there might exist an underlying variable that causes participants to engage with the BlockBot to varying degrees, ranging from barely interacting with it at all to naming it, taking it on activities, and ascribing mind attributions to it. This is not unlikely, since only half of the participants project mind attributions or take the BlockBot outside.
For example, when sorting participants on whether they make agency attributions to the BlockBot, we see that only half of the participants that had taken the BlockBot outside or on an activity also made an agency attribution (see Figure 11). This could suggest that taking the BlockBot outside or on an activity has a weak or no influence on mind attributions, since the results could be explained as pure chance: around half of participants could be susceptible to attributing mind qualities to the BlockBot. However, this raises the question of what exactly causes some participants to be susceptible to this kind of active behavior when interacting with the same robotic artifact. One study, for example, showed that people with higher levels of empathy were more positively influenced by a robot with a story than people with lower levels of empathy [21]. Perhaps the BlockBot triggers a certain personality trait in over half of the participants that causes them to engage more actively with the BlockBot compared to participants who rate lower on this trait.
5.5 Summary of Findings
In conclusion, the results show that most participants demonstrate behavior that can be interpreted as anthropomorphizing the BlockBots, consistent with Dennett’s intentional stance. Despite the fact that the BlockBots score low on human appearance and behavior, the vast majority of participants decide to treat these objects as subjects, as measured by assigning some form of identity (86% of participants), with over half of the participants also projecting mind attributes onto the BlockBot, such as agency, emotion, intention, and preference. Hence, this data is indicative of the BlockBot passing a social threshold and a majority of participants experiencing its presence as that of a social agent. Also, participants who name the BlockBot are more likely to make mind attributions.
Participants are motivated to engage with the BlockBot, and the data suggest that the BlockBot successfully establishes a common locus with participants. It is present in the domestic proximity of participants, and a subset of participants show motivation to keep this common locus intact when they move to a different location in or outside the domestic setting, by taking the robot with them. For this group, we also observe higher identity and mind attribution rates. This does not imply causation, but it is at least an indication of a correlation between common locus and attribution.
One should consider the relatively small number of observations and the uncontrolled conditions that come with the set-up of an open in-the-wild study, but the additional, more qualitative information gleaned from the texts and images aligns with the thesis that mind and identity attributions are made to the BlockBot, and that common locus plays at least some role in this.
6 Discussion
In this section, we provide a deeper interpretation of the results and discuss the implications for the field of human-robot interaction ‘in-the-wild’. We also discuss several alternative factors that could explain our data, methodological limitations of our approach and opportunities for future studies.
This paper theorized that the act of living or working in close proximity to, or in cooperation with, a robot for an extended period of time can facilitate an experience of a robot as a social agent, even when the robot displays few social cues in appearance and behavior. As an initial exploration of this hypothesis and the common locus as a relevant variable, we positioned a minimal robotic artifact called BlockBot in close proximity to people’s daily lives, where they could engage with the entity in a naturalistic and unsupervised manner and self-report on engagements made with the BlockBot.
6.1 Interpretation
The presence of the BlockBot in domestic settings and the full control of the participants over BlockBot engagement provide a unique research setting. The data shows that people are motivated to engage with the BlockBot and to communicate about it in written and photographic form. The way participants communicate about the BlockBot could be characterized as more similar to communication about a social agent than about a lifeless object. The robotic artifact seems to fulfill the role of a complacent (happy to join for all activities) but restless (wanting to leave and travel on) guest in the lives of its hosts.
The data indicates that the BlockBot does succeed in establishing and maintaining a common locus with participants and that participants are prone to engage socially with the BlockBot. Participants show motivation to keep the BlockBot inside their homes, in close proximity to their daily lives, to maintain a common locus with the BlockBot between different locations, and even to take it along on activities outside the domestic setting.
Of course, a common locus between BlockBot and participant is not a given. One BlockBot journey is particularly illustrative of this. BlockBot A was placed in the domestic setting of a young family (a couple and a young son) living in Amsterdam. After initial communication about the young son and the BlockBot, communication ceased. Eventually, about four weeks later, we received an apology message stating that the participant had never passed on the BlockBot, already had too many other ‘creatures’ in his life (girlfriend, child, cat), and apologized for being such a bad parent. Some other participants have also communicated about feeling guilty that they are not taking care of, or loving, the BlockBot as much as they would like to.
Most participants, however, do make the effort to include the BlockBot in their daily lives outside of their domestic setting, which goes beyond its basic request to be hosted and passed on in due time. Participants feel motivated to bring it along to their jobs or on leisure activities during their time off work, take it to social events such as dinners and drinks, and travel around with it, even across national borders. This shows that participants feel motivated to share events, spaces, and contacts in their life with the robotic artifact and, hence, to build and maintain a common locus between the two.
Furthermore, participants make identity and mind attributions to the BlockBot. Participants almost uniformly refer to the BlockBot as a gendered entity, and the vast majority give it a name. Over half of the participants also ascribe a mind attribution to the BlockBot. One interesting observation is that identity attributions to the BlockBot, such as gender and/or a name, seem to strongly outnumber mind attributions, such as agency, emotion, relationships, preferences, and intentions. Almost without exception: if there is an attribution of mind, the participant will have given the BlockBot a name and/or gender. This could point to a potential hierarchy within the attributions made to the BlockBot. Perhaps an attribution of identity is a precondition for communicating attributions of mind.
However, as we already mentioned in Section 5, the potential cross-correlations between variables might be due to an underlying personal factor (or factors) that triggers some participants to strongly engage with the BlockBot, while others do not seem disposed to this behavior.
The apparent hierarchy in attributions made about the BlockBot, however, is reminiscent of Dennett’s intentional stance, as per his quote in Section 4.2 [23]. The ambiguous nature of the BlockBot’s function could be argued to force participants to adopt an active role in shaping their engagement with the BlockBot. Initially, the object is framed just enough as a creature that participants decide to treat it as a subjectified object and rational agent. Participants formalize this decision by assigning identity attributes such as a name or a gender. Of course, participants know it is an artifact, but as part of the simulation that play constitutes, suspension of disbelief kicks in, and participants quite literally take an intentional stance. They deduce its beliefs, desires, and goals from the instructions on the back and attribute preferences and behaviors that fit these goals. In other words, after an initial conscious decision to subjectify the object in a playful setting, the common locus then leads to shared experiences, which leads to more attributions, and the bond forms, perhaps further reinforced by the sunk cost of the time invested.
Interestingly, what participants ‘predict the agent will do’ is in some cases not only projected onto the BlockBot but also acted out by the participant with the BlockBot. If a participant predicts the BlockBot wants to sightsee, they will need to bring the BlockBot to the world outside. This further increases the effort invested in the narrative and the ‘evidence’ that the BlockBot is a creature. This might explain the observed correlation between participants taking the BlockBot outside and on activities and a relatively high number of identity and mind attributions.
In conclusion, we observe that participants are disposed to anthropomorphize the BlockBot - to make attributions of human qualities to it - and show motivation to share time, spaces, and events with it. This suggests that the BlockBot has passed a social threshold and is experienced, to a degree, as a social agent. These observations are in line with our hypothesis and, accordingly, point to the relevance of the common locus.
While the results of our exploration are not conclusive, the data could suggest that a common locus between a person and a robot in itself functions as an important catalyst for a robot to be experienced as a social agent, assuming there was just enough of a trigger for the human to initially decide to treat the artifact as a subject to some degree. A robot’s proximity to a human’s daily life can give the sensation of sharing time, spaces, and activities with the robot, which influences the experience of the robot as a social agent, even if the robot possesses low life-like realism in its appearance and behavior. This is not unlike how humans befriend others who happen to live next door, work in the same team, or are members of the same club. With an increase of robots, robotic artifacts, and behavioral objects in the lives of people, what will be the consequences of this perception?
6.2 Implications for Human-Robot Interaction
Conventionally, when a human-robot interaction is desired, the interaction is not the final goal in itself, but is often there to capture attention or affection and redirect this for a certain purpose. One example is therapy: interaction with a social robot seems to improve the social engagement of autistic children in human-human interaction [18]. Another example is teamwork: human-robot teams perform better when humans are emotionally attached to the robot [103]. The common locus could provide a new factor in establishing human-robot relations for these types of cooperation. If a social interaction or bond with a robot is desirable by design, awareness that (prolonged) proximity to a robot in the social life of a human can facilitate social engagement might in turn inform the design and usage of such a robot.
Likewise, when robots are used in sensitive contexts, such as care, one needs to be very aware of all the factors that influence interaction and relationship building, and their side effects - not just in terms of appearance and behavior, but also in terms of common locus. In addition, we can also imagine scenarios in which we explicitly want to prevent these types of social engagements towards robots. While attachment is a valuable quality in human-robot teaming, these attachments could potentially endanger human lives, for example when an attachment to a robot drives a soldier to position himself in harm’s way to rescue the robot [10]. In those cases, if such a social or emotional engagement is undesirable, one could argue that a robot should not only be ‘de-anthropomorphized’ in its appearance and behavior, but that a common locus between a human and a robot should be avoided as well, for example by turning it off and storing it out of sight with other equipment when not deployed. Naming the object should be avoided, the use of non-anthropomorphic language should be encouraged, and specific bomb disposal robots should be frequently rotated across units. There also exists a sizable mistrust concerning the use of robots in the care and therapy of children, the elderly, and the disabled, and an apprehension about the introduction of robots in other areas such as education, healthcare, and leisure [18]. One concern is that the (questionable) authenticity of human-robot attachments could negatively affect human-human or human-animal relationships (for further reading see [95] and [14]). The idea that a robot’s proximity to one’s daily life could lead to social engagement and even a certain bond could provide a basis for some to argue that we should keep robots away from humans in order to prevent human-robot attachments from deteriorating human-human relations. As already frequently noted, our research is open-ended, less controlled, and qualitative, but specifically for sensitive areas it is already relevant to point out that common locus is a potential risk factor that merits further scrutiny in any of these high-risk implementations.
6.3 Known Limitations
The results presented in this study are by design indicative and explorative, and this paper does not claim to provide any definitive conclusions. In this study we used an open-ended and unsupervised methodology to study how people make sense of a robotic artifact. This choice comes at the expense of several of the benefits that a laboratory study offers and limits both the number of factors that we can control and the quantity and quality of the data that we gather. Accordingly, we encountered several limitations of our methodology, which we discuss below.
Firstly, the addition of new participants is slow. It takes considerable time for people to host a BlockBot and pass it on. This can be explained by the fact that the BlockBot requests to be passed on to a friend or an acquaintance, which raises the bar for movement between people.
Secondly, the quantity and frequency of data is irregular and differs greatly between participants. While some participants seem very disposed to communicate about the bot, others seem less inclined to do so. Accordingly, some participants send us two messages and a single photo before passing the BlockBot on, while others keep it for three weeks and write three pages full of observations and experiences with the BlockBot. When a participant who receives the BlockBot does not communicate with the phone number, we cannot gather data and we lose track of the BlockBot’s location. Recording the data was not as problematic as receiving it: once messages, images, or videos had been sent to the mobile phone number presented on the BlockBot, this data was labeled and stored for further analysis.
Thirdly, the quality of the data that people send varies between participants. While some take a very active role towards the bot, attributing life-like qualities to it, others merely describe where it is or do not report much about it at all. This leaves the potential that some engagement was under-reported. Finally, it is important to realize that the people that participated in this study chose to do so voluntarily and were willing to host the BlockBot, which could suggest a caring, pro-active nature or a preexisting interest in robots. We are also aware that the demographic make-up of our initial participant pool is not representative of all age groups and ethnic backgrounds. Cross-cultural studies have shown that there are differences between cultures in how robots are perceived [59, 86, 94].
We acknowledge the possibility that asking participants to self-report their attributions and actions with the BlockBot might not paint a complete picture of their social interaction with the robotic artifact and is bound to encourage messages that contain attributions. For this study we purposely restricted ourselves to the information we received through WhatsApp, without any meaningful stimuli from the experimenters. A potential solution in future research, one that would preserve the initial unsupervised interaction between participants and the BlockBot, would be to actively question participants about their engagement through interviews or surveys after they have hosted it, or to place a BlockBot at one household for a set amount of time and conduct interviews with participants about their connection to the BlockBot during and after its stay. This approach could reveal anecdotes, attributions, or activities that were not reported on.
Also, we are well aware that participants may want to ‘play along’, actively choosing to suspend their disbelief without ‘really’ believing the bot has a mind, desires, emotions, intentions, and so forth. That said, we kept the task as open-ended as possible, without implying that this was a research experiment in human-robot relationships. We also do not make any specific ontological claims as to whether participants truly attributed a mind, yet we think it is interesting that people do this even in a purely playful context, and we see play also as a form of reality. At the end of the day, as Shakespeare put it: ‘All the world’s a stage’.
6.4 Future Studies
While our data can be argued to support our hypothesis, the question remains to what extent other factors might have influenced the results. We discuss several alternative or complementary factors: a possible framing effect, a novelty effect, the BlockBot’s appearance, make-believe play, and to what extent the social engagement we observe in the results differs from object attachment. This list is not meant to be exhaustive. We will discuss these possible factors below and make recommendations on how to address them in future work. With an identical set-up but a larger fleet of robots traveling over a longer period of time, we could explore the topic more deeply and draw firmer conclusions.
6.4.1 Framing Effect.
One aspect that potentially influences the results is the possibility that the deployment narrative of the BlockBot already frames participants to perceive it in a certain manner. The BlockBot is presented as an entity that wants to travel the world, stay with people and has parents that people can report to. This narrative mainly functions as a forward-propelling mechanism for the BlockBot in order to increase the number of participants and subsequently to increase the amount of data that we receive. However, does this narrative also function as a framing device that influences the participants’ perception of the BlockBot from the start? This could be studied by experimenting with different framing variants.
The exact mechanisms and effects of framing on robot perception are not yet fully understood. While some studies have shown how (anthropomorphic) framing can affect how people perceive robots [20, 21], other studies found that anthropomorphic framing had no effect on the human-like perception of the robot [72]. Other times, effects are subtle. One study found subtle differences in children’s gaze behavior between a robot that was framed as a social agent and one framed as a machine-like being [100], but this difference in perception was not present in the children’s pre-test and post-test evaluations [101]. Another study showed that while framing does affect participants’ mind perception of a robot in a laboratory setting, this effect is hard to replicate in real-world studies [98].
It is important to point out that participants are not explicitly encouraged to, or discouraged from, anthropomorphizing the robotic artifact. Furthermore, the majority of attributions made to the BlockBot are unrelated to its deployment narrative. For example, the narrative does not provide the BlockBot with a gender, name, or any preferences. However, one can argue that - even if the attributions are unrelated - because the BlockBot is presented as a robotic artifact with a goal, this lowers the threshold for participants to make attributions compared to a situation in which there was no ‘framing’ to begin with. Hence, we acknowledge the possibility that the deployment narrative of the BlockBot frames how participants view the robotic artifact. In future studies, we could, for example, compare different narratives with which we deploy the BlockBot. This could mean different narratives on the back of the BlockBot or different narratives told to the initial participants.
6.4.2 Visual Appearance: Face.
Due to the open-ended nature of this study, which did not isolate single aspects of the BlockBot, the question arises whether the BlockBot passes the social threshold based on the common locus, or whether its admittedly low realism in appearance or behavior was still ‘high enough’ for it to pass the social threshold based on those qualities (or a combination of them). Critics could, for example, point to the fact that the BlockBot displays a ‘face’ as the main factor as to why we observe participants ascribing identity and mind attributions to the robotic artifact. In general, to push the boundary further, it may be interesting to experiment with lowering visual appearance by simplifying, neutralizing, or even removing the smiley face.
A similar argument to the one about the framing effect can be made here. While some attributions seem more related to the BlockBot’s face (the ability to observe, for example), there are also examples that seem unrelated to this feature, such as having certain preferences. An alternative explanation is that, because the BlockBot is almost always positioned in a safe domestic environment and not exposed to any danger, it might not appear to feel anything but happy in the eyes of participants. In our preliminary study with the TouristBox, which was left outside to be picked up by passersby, some participants did make mind attributions of emotions such as fear and worry despite its fixed smiling face.
This explanation would also imply that the same BlockBot would be able to pass a social threshold if participants interacted with it in a laboratory setting, or that the same BlockBot without a face would not be able to pass a social threshold in a common locus setting, which our hypothesis states it would. Due to the nature of our study, we cannot exclude this possibility with certainty, and we concede that the presence of the face could potentially lower the threshold for participants to make attributions that are unrelated to its appearance alone. This establishes the need for a future study in which we deploy a BlockBot with an even more abstract face (for example, without a mouth) or without a face on its display at all, and see whether it generates similar results indicating that the BlockBot has passed a social threshold.
6.4.3 Novelty Effect.
The observed results could potentially be influenced by a certain novelty effect of having an unfamiliar, ambiguous robotic artifact in the social setting of the home. A novelty effect would cause participants to initially be very disposed to engage with a robot and lose interest after its novelty has ‘worn off’. This type of effect has been observed in previous studies [50]. One study, which aims to provide a formal model of anthropomorphism, considers anthropomorphism as a dynamic concept that evolves over time. This model theorizes that at the start of a human-robot interaction, anthropomorphism spikes due to a novelty effect, before familiarization stabilizes this tendency, only to spike up again due to disruptive and surprising robot behavior [56]. In future work, one could introduce a variant where, at very low frequency (say, once a week), the bot demonstrates a certain behavior to keep stimulating engagement or the passing on of the device (for instance in case of no movement), though this may reduce the open and unbiased nature of our set-up and actually increase novelty bias.
In our current data we do observe a frequency pattern that could be interpreted as a novelty effect (see Figure 6). Participants often communicate most about the BlockBot in the early days of having received it, or communicate about it at a relatively high frequency at one moment compared to later moments. Often participants send one or two messages with photos, after which either the frequency drops or they pass it along. Even with participants that are highly frequent communicators, we note a decline in the frequency and quantity of communication over time. One participant, who had a BlockBot for over two weeks, noted that her BlockBot “Jules” slowly lost his “magic robot powers” and became a part of the interior, akin to a plant, lamp, or printer. It is possible that a novelty effect induced an initially high level of anthropomorphism, which in turn affected the type of communication we received from participants.
6.4.4 Make-believe Behavior.
An alternative explanation of the pro-social behavior that we observe in the communication with participants about the BlockBot is that these participants engage in a sort of make-believe or play behavior. Such behavior is believed to perform a crucial role in children’s development, emotional health, learning, and self-regulation [6, 36, 57, 76]. Participants might be aware that the simple robotic artifact possesses none of the mental states they ascribe to it, yet they suspend their disbelief or engage in make-believe play. In that sense, this playing-along behavior does not make the results of our type of study less real; future research could actually provide additional insight into why people are willing to adopt an intentional stance.
Several authors have applied this framework to human-robot interaction, drawing from the work of philosopher Kendall Walton and his theory of make-believe behavior [25, 83]. Walton argued that people ignore their disbelief rather than suspend it. Make-believe, he proposes, entails the creation of a fictional world that participants inhabit. People use objects as ‘props’ to generate such fictional truths [25]. When making attributions to the BlockBot that participants might know are false, they are considering what Rueben and colleagues call ‘fictional states of affairs’ [83]. Within this ‘Waltonian account’, interacting with the BlockBot, or any robot for that matter, is akin to engaging with other media such as literature, theater, or film.
This perspective suggests that the observed behavior is the result of the BlockBot being a convincing piece of fiction. As Novitz wrote: “To believe fiction, we have seen, is to be deceived by it, and while deception may promote an appropriate emotional response, it can never promote a proper understanding of the work. To disbelieve or to discount the work, on the other hand, prevents us from acquiring those beliefs necessary for an appropriate emotional response to it. Rather than respond to fiction by believing, disbelieving, or discounting it, one must respond imaginatively by making-believe. Such imagining, we have seen, is for the most part derivative, and involves thinking of or considering the fictional world described by the author without a mind to the factual vacuity of his descriptions. It is this which allows us to acquire beliefs about creatures of fiction which are capable of moving us”.
However, we should be wary, as Jane McGonigal points out, of being more convinced by the participants’ ‘performances’ than they are convinced by their own make-believe [68]. Writing within the field of pervasive gaming, McGonigal refers to the practice of ‘users’ staging, performing, and playing along with an unfolding experience as performed belief [45, 68]. Jacobsson suggests that such performed beliefs are fundamental and capture ‘an essential part of how ... owners of robots appear to advance and enrich their experience’ [45].
Finally, we can ask what motivates adults to engage in this behavior, through what mechanisms this make-believe play is reinforced or challenged, and what the potential instrumental value of such behavior could be. The intentional stance provides a possible answer. It is called a stance for a reason: it is not necessarily the case that the human needs to believe the other ‘really’ has goals, beliefs, intentions, and so on. The human just actively decides to treat the other as a subject, for example because it makes the other’s actions easier to predict. Given this, the intentional stance works equally well in ‘simulated realities’, like play, as it does in reality. In other words, the intentional stance combined with make-believe behavior and play poses a promising avenue for future research.
6.4.5 Object Attachment.
A final question worth considering is to what extent the observed social engagement towards the BlockBot differs from other forms of object attachment. Robots are usually understood as mechanical artifacts that possess a level of autonomy, agency, and choice [49]. The BlockBots presented in this study possess none of these qualities; hence we have referred to the BlockBot as a robotic artifact, as opposed to a robot. However, has our methodology taken such a minimal approach that we have left the field of human-robot interaction and entered human-object relations?
Objects, tools, and machines function, beyond their practical application, as formative agents of social complexity and experience [3], and as a ‘major contributor and reflector of our identities’ [5]. Cars, for example, facilitate the formation of memories through road-trips and holidays, carry an idea of ‘freedom’, ‘responsibility’, or ‘status’, and, through years of usage and upkeep, provide users with a certain sense of identity and a potential emotional bond [3]. In this sense, we relate to objects as an extension of our own identity. It is well observed that people across different age ranges can form strong attachments to objects. Children, for example, often have a ‘transitional object’: a favorite stuffed animal, blanket, or toy that they form a strong and persistent attachment to (although this is not a universal stage in child development) [60]. Adults might possess certain emotional objects whose value is not (primarily) derived from their function, but which we find valuable because of what we attribute to them: the watch of a deceased family member that holds certain memories, or a medal that signifies one’s competitiveness, physical qualities, and persistence. These objects often feel like a part of our identity, and some would feel as if they lost a part of themselves if these emotional objects were to be lost or destroyed.
There is a difference between objects (or even living entities) that may simply be regarded as objects, such as a shoebox, a mail-sorting robot, or an avocado - or even objects one has grown attached to, such as house keys and medals - and objects that are subjectified, artificial creatures, such as a teddy bear or certain robots like our BlockBot. The difference is in how these artificial creatures ‘appear’ to us and in which ontological category we place them. Damiano and Dumouchel note that people across age groups perceive robots as “ambiguous objects” in relation to traditional ontological categories [19]. Robots seem to inhabit a place on a spectrum between alive and not alive, sentient and not sentient, intelligent and not intelligent [48]. We observe this ontological ambiguity in the results when we analyze what participants portray the BlockBot as (see Figures 4, 5 and 12). Participants do not exclusively place the BlockBot in an ontological category of living or non-living entities, but seem to associate it with humans, animals, plants, toys, and inanimate objects. While someone can get attached to an object, this does not imply that this person will regard the object as another entity or subject. You can grow attached to a watch, but this does not mean that you would consider it another entity.
However, this divide seems to be a highly individual and dynamic process. For example, while some people readily attribute moods and a certain persona to their car, others do not have this tendency whatsoever. For some, a car is nothing more than an object, while others grow attached to their cars, give them an identity, and attribute emotions to them. Therefore, when considering a difference between robot and object attachment, it is perhaps more useful not to make a distinction between objects, machines, and robots, but rather between those entities that pass a social threshold and those that do not elicit this perception. In other words: those objects that people ‘subjectify’ and those objects that people ‘objectify’.
This raises the question: does a mechanical artifact need to possess certain qualities to earn the moniker of a ‘robot’, or is it sufficient that a human merely believes or performs that it has those qualities? One could argue that if a human believes an entity has the capacities of a robot, it is irrelevant, when studying their engagements, whether the entity they engage with actually possesses them. As Coeckelbergh puts it: “the ‘content’ that counts here is not what is ‘in the mind’ of the robot, but what humans feel when they interact with the robot” [15]. In that sense, a future line of research could also take characteristics of the hosts into account, alongside robot (appearance, behavior) and system (common locus) characteristics.
7 Conclusion
Science fiction has provided us with an image of a future cohabited by humans and robots. While robots are far from the autonomous entities they are portrayed to be in various media, robotic technologies are increasingly present in our daily lives and the spaces we frequent and inhabit, and they form bonds and attachments with humans. The presence of robots in human-inhabited environments provides an exciting new frontier for the research of human-robot interaction and relationships in the wild. This paper aims to contribute to this novel body of research. Specifically, the aim of this study is to introduce the concept of the common locus as a relevant variable for robots to pass a ‘social threshold’, after which they are perceived not as a machine but as a social agent. We theorized that, aside from its realism in appearance and behavior, the presence of a robot in close proximity to a person’s daily life, space, and experiences could be a relevant factor for a robot to pass this social threshold. This may inform robot interaction research and design, both for situations where bonding is desired and for those where it should be avoided.
We designed a robotic artifact called the BlockBot that does not make a strong claim of realism of appearance or behavior but is aimed to facilitate a common locus with a participant. We released four BlockBots into ‘the field’ to be hosted and passed on by people in an unsupervised, open-ended experiment to study what kind of human engagement would arise based on the robotic artifact’s presence in the participants’ domestic setting. We theorized that, regardless of its lack of realism in appearance or behavior, the BlockBot would still be able to pass a social threshold above which participants would perceive it as a social agent.
Although the initial data is limited and of a qualitative nature, the results indicate that the BlockBot successfully establishes a bond as well as a common locus with participants, who are prone to engage with it. Participants keep the robotic artifact close to them and anthropomorphize it. The BlockBot evokes identity and mind attributions and is frequently taken on social activities outside of the home, even on long trips. These observed engagements could suggest that the BlockBot has passed a social threshold whereby participants consider it - to a certain degree - a social actor. Given that we minimized its realism in appearance and behavior, the results could suggest that its common locus was a relevant factor in passing the social threshold. However, we also identified and discussed several alternative explanations that might have influenced the observed engagements: a framing effect, a novelty effect, its appearance, and object attachment.
The concept of common locus implies that the presence of robots and robotic artifacts in our daily lives will impact how we perceive these robots. It is important to consider this dynamic because these attributions can be harmless and even useful in cases where bonding is desirable. However, if the perception of a robot as a social agent and any possible social engagement is undesirable, the common locus implies that certain robots should be kept away and out of sight when not used. Merely removing anthropomorphic qualities is potentially not sufficient.
Two BlockBots are still out there couch surfing and operating as figurative ‘cultural probes’ into human-inhabited environments [33]. We are excited to follow their journey and to see what data they will generate, where they will go, what activities they will be part of, and what attributions will be made along the way. Our in-the-field approach turned out to be an exciting methodology for studying human-robot interaction. We have received an interesting variety of messages, photos, videos, and feedback. The positive reactions to the BlockBot point to the strength of its design and its viability as a research tool for human-robot interaction. Since the BlockBot can be designed with different appearances or behaviors, future studies could use the BlockBot not only to study the relevance of the common locus but also as a research platform in a wide variety of other human-robot relationship studies in human-inhabited environments.