2 Background
Clinical empathy is defined as “the ability to understand the patient's situation, perspectives and feelings, and to communicate that understanding to the patient” [1, p. 221]. Research has demonstrated that patients have little difficulty in identifying the use of empathetic behaviors by healthcare professionals [2, 3], and that these behaviors in turn are associated with a number of positive patient health outcomes. In diabetes research, for example, patients of physicians high in empathy were found to be significantly more likely to have effective control of their illness, compared to patients of physicians low in empathy [4].
Clinical empathy has been shown to affect patients' psychological health and/or treatment outcomes across a number of conditions. For example, physician empathy has been associated with increased patient satisfaction and psychological adjustment and decreased psychological distress in cancer care [5]. Even in trauma centers, where the focus is on fast and effective delivery of medical services, patients who perceived their physician as having higher empathy reported better treatment outcomes [6].
Physician empathy is a key factor in ensuring patient adherence to medical treatment [7]. This may be because physician empathy enhances physician-patient agreement regarding treatment decisions made during medical consultations [7, 8]. Physician empathy may also impact patient adherence by facilitating patient coping with, management of, and understanding of illness [9], as well as by increasing patient satisfaction and trust. Indeed, numerous studies have shown a positive association between a physician's use of empathy and higher patient satisfaction and trust [5, 6, 9–13].
It has been proposed that patients have higher trust in physicians who respond to their healthcare issues with an appropriate demonstration of understanding and concern [14]. It is further theorized that empathy behaviors increase patient perceptions of physician care [11] and physician-patient collaboration [12]. The use of physician empathy is thought to increase patient satisfaction by enhancing perceptions of physician commitment and collaborative care, and by reducing patient frustration, disempowerment, and distress [5].
Clinical empathy can be demonstrated through both verbal and non-verbal behaviors. An important aspect of clinical empathy is employing a feedback focus: physician phrasing such as “Let me see if I have this right…” and “Can you tell me more about this…” gives the patient an opportunity to correct or add further information [1, pp. 221–222]. Employing a feedback focus also provides a patient with concrete evidence that their physician is actively listening to their medical concerns [15].
Non-verbal behaviors can also be employed to express attention and understanding [15, 16]. In physician-patient interactions, demonstration of behavioral empathy can include eye contact, smiling, and a forward lean [17–20]. Other non-verbal behaviors found to be associated with empathy include head nodding, active listening, facial mimicry, and tone of voice [15, 19].
Head nodding is seen to portray agreement, acceptance, and acknowledgement in many cultures, and it has been associated with empathy within the context of clinical interactions. For example, in video-taped interactions between medical students and patients, head nodding by medical students was significantly associated with increased ratings of empathy by observing clinicians [18]. In a review of clinician non-verbal behaviors during interactions with patients, in which ‘clinicians’ included both medical physicians and psychotherapists, clinician head nodding was associated with higher patient ratings of clinician ‘empathetic qualities’ [19].
Further support for these findings comes from research in which head nodding is purposefully avoided or absent. For example, Marci and Orr [20] conducted research in which psychiatrists were instructed to deliberately suppress eye gaze and head nodding behaviors during clinician-patient interactions. The researchers found that, in comparison to clinicians who used these behaviors, clinicians who suppressed them were rated significantly lower in perceived empathy by patients. Psychophysiological concordance (measured through simultaneous skin conductance) was also significantly lower for patients in this group. Absence of head nodding, as well as of other non-verbal behaviors such as smiling and eye contact, has also been found to be associated with lower patient satisfaction [21] and with both short- and long-term decreases in patient functioning [22].
3 Related Work
The model of Robot-Patient Communication [23] theorizes that demonstration of empathetic behaviors by healthcare robots will result in benefits for patients similar to those produced by the empathetic behaviors of human clinicians. Empathetic robot behaviors could include demonstrating listening, reflecting appropriately on users' emotional disclosures, and conveying understanding through verbal and non-verbal communication.
Aspects of the Robot-Patient Communication model have been tested in human-robot interactions, showing that robot smiling, forward lean, and humor can improve user perceptions of healthcare robots [24–26]. However, empathetic statements and head nodding by a robot have received very limited attention in healthcare human-robot interaction research.
A handful of studies have shown that robot use of empathetic statements is associated with positive user outcomes in other social applications. For example, a robot that made empathetic statements about a lost bag was rated higher in empathy, emotion, and overall behavior by users than a robot that made neutral statements [27]. A robotic cat that made empathetic statements and facial expressions towards its play-mate during a game of chess was rated as more helpful and higher in engagement and self-validation than a neutral robotic cat [28]. In another study, the robotic cat was rated as more friendly by players who had received empathetic statements from the cat during a game of chess with another player than by players who received only neutral statements [29]. A later study found that while a robot's empathetic statements are easily recognized by participants, its affective facial expressions are often mismatched [30].
There is some evidence that the effects of robot empathy behaviors may not be sustained over long periods of time. In a recent study, robot verbal empathy statements and non-verbal empathy behaviors (such as eye gaze) were associated with an increase in meaningful discussions during an initial session between a robot that taught about sustainability and student users [31]. However, when the experiment was repeated over the course of two months, robot empathy was not associated with any significant long-term learning outcomes. More research is needed, however, to understand the long-term effects of robot empathy.
In a healthcare vignette (a hypothetical situation described to participants in text form), a hypothetical robot that used patient-centered speech received higher ratings of emotional intelligence than a robot that used task-centered speech, although this study did not measure perceived empathy (Chita-Tegmark et al., 2019). Examples of patient-centered and task-centered speech are “It is currently difficult for the patient to observe the treatment plan” versus “The patient currently shows high levels of treatment non-compliance”, respectively. Interestingly, participants also rated the hypothetical patient significantly more favorably in conditions where the robot used patient-centered speech than in conditions where the robot used task-centered speech. Contrary to the authors' hypotheses, the robot in this study was not rated lower in trust or acceptance when using task-centered speech. Given the positive relationship between empathy and emotional intelligence (Ioannidou & Konstantikaki, 2008), it is possible that the robot would also have been rated as more empathetic when using patient-centered speech, had this been measured.
To our knowledge, only one study has investigated the effect of robot empathy in a healthcare-based human-robot interaction [32]. In this pilot study, 31 children who were to have an intravenous (IV) line placed during a hospital stay were randomized to one of three groups: one in which a play specialist and no robot were present, one in which a play specialist and a non-empathetic robot were present, and a final group in which a play specialist and an empathetic robot were present during IV placement. In the empathy condition, the robot changed its verbal responses and facial affect based on the child's expressed level of fear and pain, and helped the child practice deep breathing and prepare for the procedure. Children in the empathy condition were significantly more likely to report that the robot had ‘feelings’ and that the procedure ‘hurt less’. Non-significant results regarding observed pain and distress and parental satisfaction may be due to the small sample size.
Two studies have specifically examined the effect of robot empathy on user ratings of robot trust or user satisfaction. First, a pilot study with a chimpanzee robot [33] found that users who conversed with an empathetic version of the robot were more satisfied than those who conversed with a neutral version (robot empathy was demonstrated through facial mimicry and head gestures). A second pilot study found a non-significant trend for participants to rate a humanoid social robot (Pepper) that used verbal statements and large gestures when the participant was inattentive as higher in trustworthiness than a robot that did not show these behaviors [34].
Preliminary research on robot head nodding suggests beneficial effects. For example, human head nodding significantly increased during human-robot interactions when the robot was able to ‘understand’ human head nodding and nodded in return [35]. Robot nodding may increase user perceptions of the robot's comprehension or increase user engagement. Robot head nodding in response to hearing instructions, such as how to make a cup of tea, has also been shown to increase user perceptions of robot engagement, comprehension, and likeability [36]. However, no studies could be found investigating the effect of robot head nodding on user perceptions in a healthcare context.
3.1 Rationale
To our knowledge, only one study has specifically examined the effect of healthcare robot empathy, expressed via facial and verbal behaviors, on the perceptions of human users in a healthcare-based human-robot interaction [32]. While this study offers preliminary evidence that healthcare robot empathy can increase children's perceptions of robot ‘feeling’ and decrease children's perceptions of pain, further research is needed in adults and in other scenarios. In addition, it is necessary to understand how healthcare robot empathy affects trust and satisfaction. This is critical given the growing body of evidence demonstrating significant associations between physician use of empathy and increased patient trust and satisfaction [5, 6, 9–13]. When considering the study of trust in particular, researchers argue that distrust is not simply the opposite of trust, and as such, trust and distrust should be treated as separate constructs [37, 38]. Thus, both trust and distrust were examined in the current study.
Head nodding is a non-verbal behavior that has been found to be associated with patient satisfaction and perceptions of empathy when used by clinicians [18–22]. Given the absence of research investigating the effect of healthcare robot nodding on patient trust and satisfaction, research on this behavior may prove promising. As previous research has shown head nodding to be not only recognized but also associated with physician empathy when observed in a video-taped physician-patient interaction [18], the use of robot head nodding in an experimental video study is warranted.
3.2 Aim
The aim of this study was to examine the effect of verbal and non-verbal empathetic behaviors by a healthcare robot, during a video-recorded interaction with a patient, on participant perceptions of robot empathy, trust, distrust, and satisfaction. Empathy was demonstrated by the healthcare robot through use of empathetic statements (verbal empathy) and head nodding (non-verbal empathy) behaviors.
3.3 Hypotheses
(1)
The use of empathetic statements by the robot would be associated with increased participant ratings of empathy, trust, and satisfaction, and decreased ratings of distrust, compared to conditions in which empathetic statements were not used.
(2)
The use of head nodding by the robot would be associated with increased participant ratings of empathy, trust, and satisfaction, and decreased ratings of distrust, compared to conditions in which head nodding was not used.
(3)
The combined use of empathetic statements and head nodding by the robot would be associated with increased participant ratings of empathy, trust, and satisfaction, and decreased ratings of distrust, compared to conditions in which empathetic statements or head nodding alone were used.
4 Method
4.1 Experimental Set-up and Materials
This study was originally designed as an experiment involving face-to-face human-robot interactions within the context of a healthcare scenario. Unfortunately, due to social distancing requirements introduced in the wake of the Covid-19 pandemic, this was not possible, and at the time there was no certainty about when the original design could be carried out. It was therefore decided that the current study would proceed using an online video study design. The authors felt it important that research continue despite the uncertainty surrounding the pandemic, and that an online design would still allow valuable insight into user perceptions of a healthcare robot's use of empathy.
In line with restrictions at the time of this work, a mixed between-within experimental video design was employed. The study was carried out online, using both the Amazon Mechanical Turk (AMT) and Qualtrics platforms. AMT is a public crowd-sourcing platform that connects researchers and businesses with individuals willing to take part in research and other work tasks. Potential participants (registered with AMT) were notified of the current study by AMT via their online profile. AMT participants, as opposed to a patient population, were recruited due to the preliminary nature of this work and out of consideration for patient circumstances. Patient populations are generally unwell, and it is reasonable to first test perceptions of healthcare robot behaviors with healthy populations, in order to avoid adding unintentional stress to patients through exposure to robot behaviors that may be associated with negative user outcomes.
In order to be included in the current study, participants were required to be over 16 years old, fluent in English, and listed as a ‘Master Worker’ on AMT. A Master Worker is a registered ‘worker’ on AMT who has consistently demonstrated high-quality work across a variety of tasks. Participants who met the eligibility criteria and chose to take part were directed by AMT to a secure hyperlink, which connected them to Qualtrics, an online platform for running surveys. The current study used Qualtrics to gather informed consent and to run the experimental sessions. Ethics approval for the current study was granted by the University of Auckland Human Participants Ethics Committee (Ref. 024431).
The Nao robot was chosen for the current study (see Figure 1). Nao is a programmable humanoid robot by SoftBank Robotics (Japan). The Nao robot is able to ‘speak’ and can perform a number of physical movements using its hands, arms, head, torso, and legs.
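Purely as an illustration (this is not the control code used in the study), an empathetic statement and a single head nod could be scripted on a Nao robot through the NAOqi Python SDK, as sketched below; the robot's IP address and the timing and angle values are placeholder assumptions.

    # Illustrative sketch only (not the study's actual control code):
    # triggering an empathetic statement and a single head nod on a Nao robot
    # via the NAOqi Python SDK. ROBOT_IP and the timing/angle values are
    # placeholder assumptions.
    from naoqi import ALProxy

    ROBOT_IP = "192.168.1.10"  # placeholder robot address
    PORT = 9559                # default NAOqi port

    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    motion = ALProxy("ALMotion", ROBOT_IP, PORT)

    # Enable stiffness on the head joints so they can be moved
    motion.setStiffnesses("Head", 1.0)

    # Verbal empathy: speak an empathetic statement
    tts.say("That sounds really hard.")

    # Non-verbal empathy: a single head nod, pitching the head down
    # (about 17 degrees) and back to neutral over one second
    motion.angleInterpolation(
        "HeadPitch",   # joint to move
        [0.3, 0.0],    # target angles in radians
        [0.5, 1.0],    # times (seconds) at which each angle is reached
        True,          # angles are absolute
    )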
4.2 Procedure
Once informed consent was obtained, participants were directed to complete a baseline demographics questionnaire, which asked about age, gender, ethnicity, occupation, and any previous experience interacting with robots. All participants were then instructed to view the first of two separate online videos. The initial video was approximately 2 minutes in duration and presented an interaction between a patient (‘Sam’) and a nurse robot (‘Jane’). In this first interaction, the patient is seen asking the nurse robot for information about a general health check and what this entails. When the nurse robot offers to book a health check for the patient, the patient is seen agreeing to undertake the check with a doctor. After participants had viewed this initial video, they were asked to complete the time-point one post-interaction measures (see ‘Measures’ section for further details).
Following completion of time-point one measures, participants were randomized to view a second video. In this second video, the healthcare robot was seen to behave in one of the following ways:
(1)
The robot uses empathetic statements and head nodding during the interaction with the patient
(2)
The robot uses empathetic statements and no head nodding during the interaction with the patient
(3)
The robot uses no empathetic statements and head nodding during the interaction with the patient
(4)
The robot uses no empathetic statements and no head nodding during the interaction with the patient.
Randomization was performed using Research Randomizer (randomizer.org) and kept blinded to the researcher until the data analysis phase. The second video depicted a second interaction between the same patient (‘Sam’) and the same nurse robot (‘Jane’) (see Figure 2). In this second interaction video, the patient is seen asking the nurse robot to take their blood pressure as part of the health check. The patient is then seen discussing their symptoms and emotional state with the nurse robot, including the fact that they are feeling tired, having trouble sleeping, and “really need the Doctor to get to the bottom of things”.
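As a purely illustrative sketch of how a blinded allocation list for the four conditions of this 2 (empathetic statements) × 2 (head nodding) design could be pre-generated, the Python snippet below provides an equivalent to the Research Randomizer procedure; the sample size and random seed shown are hypothetical.

    # Illustrative sketch only: the study used Research Randomizer (randomizer.org).
    # This shows an equivalent way to pre-generate a blinded allocation list for
    # the four conditions of the 2 (empathetic statements) x 2 (head nodding)
    # design. The sample size and random seed below are hypothetical.
    import csv
    import random

    CONDITIONS = [
        "empathy+nodding",        # empathetic statements and head nodding
        "empathy+no-nodding",     # empathetic statements, no head nodding
        "no-empathy+nodding",     # no empathetic statements, head nodding
        "no-empathy+no-nodding",  # neither behavior
    ]
    N_PARTICIPANTS = 100  # hypothetical sample size (divisible by 4 here)

    random.seed(20210401)  # fixed seed so the allocation list is reproducible

    # Balanced allocation: repeat the four conditions equally, then shuffle
    allocation = CONDITIONS * (N_PARTICIPANTS // len(CONDITIONS))
    random.shuffle(allocation)

    # Save the list so it can be withheld from the researcher until analysis
    with open("allocation.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["participant", "condition"])
        for i, condition in enumerate(allocation, start=1):
            writer.writerow([i, condition])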
In the head-nodding conditions, the nurse robot is seen nodding to the patient as the patient discusses their symptoms (resulting in three separate head nods during the interaction). In the verbal empathy conditions, the robot uses empathetic statements throughout the interaction in response to the patient's disclosures about their symptoms and emotional state (e.g., “That sounds really hard. I can imagine that anyone in your situation would want to get some answers.”). In the conditions without verbal empathy, only neutral statements are made. Care was taken to ensure that robot statements across conditions were similar in length.
Appendix A includes the ‘script’ for all four experimental conditions, inclusive of head nodding behaviors. The accompanying media files include all four experimental videos, as well as the initial interaction video. The decision to include an initial interaction video was based on previous research that found few significant differences between groups in regard to user perceptions of robot behavior [24]. It was hypothesized that this lack of difference was potentially due to the vast majority of participants having never interacted with a robot before, leaving them without a basis of comparison when completing self-report perception measures. Thus, the current study used an initial interaction video in order to provide participants with a basis of comparison when completing self-report measures.
Once participants had viewed the second interaction video, they were asked to complete the time-point two measures. Measures completed following the first and second interactions (time-point one and time-point two) were identical. Upon completion of time-point two measures, participants were given an authorization code which could be entered through AMT in order to claim a US$4 ‘thank you’ payment. A procedural outline of the study is provided below (Figure 3).
7 Discussion
This study investigated the effect of a healthcare robot's use of empathetic statements and head nodding on participant perceptions of robot empathy, trust, distrust, and satisfaction. Verbal empathy statements resulted in higher ratings of robot empathy, trust, and satisfaction, and lower ratings of distrust. Head nodding had no significant effects on empathy, trust, distrust, or satisfaction scores, and there were no significant interaction effects of verbal empathy and head nodding on any outcome. It is possible that robot head nodding has no effect on user perceptions of robot empathy in healthcare, but more research on this behavior is needed; nodding may have stronger effects when directed at the user themselves, or when more frequent or longer in duration. Neither empathetic statements nor head nodding had an effect on participants' desire to interact with the robot face-to-face.
The findings of the current study provide support for the Robot-Patient Communication model [23], which theorizes that robot communication behaviors can affect patient-related health outcomes; this was true only for empathetic statements, however, and not for head nodding. The findings also align with research demonstrating that physician empathy can increase patient satisfaction [5, 10] and patient trust [11–13].
The results also align with previous research in human-robot interactions, in which the use of verbal robot empathy was found to increase participant scores for helpfulness and engagement, both of which are measured as part of the satisfaction scale [28]. They also align with results demonstrating that verbal robot empathy can increase user perceptions of robot trustworthiness [34] and of a robot's ‘feelings’ (i.e., empathy) [32].
The current study found that distrust scores were significantly higher following conditions in which the robot did not use verbal empathy, compared to conditions in which it did. To our knowledge, this is the first study that has examined the use of robot empathy in relation to participant distrust.
8 Limitations
This research has several limitations. First, the study had an online experimental design utilizing video clips. Participants would likely have perceived that the interaction between the patient and robot was staged for the purposes of the study; results may therefore have differed had participants seen a natural interaction between a patient and a robot. Second, viewing a human-robot interaction differs from engaging in one, so results may also have differed had participants been able to interact with the robot face-to-face.
Third, participants were recruited online through AMT. Research has shown that many individuals undertaking tasks on AMT do so as a source of income, with most completing 20 to 100 tasks per week [47]. It may be, therefore, that individuals registered as workers on AMT are more experienced in undertaking research studies and approach surveys in a way that differs from the general population. Finally, AMT workers registered in the United States of America (USA) have been found to have a higher level of education than the general USA population [47]. The majority of participants were from the USA (N = 90, 90%), and it may be that individuals with higher education are more open and positive towards interactions between robots and patients. This may limit the generalizability of the results.