Article

Influence of Social Identity and Personality Traits in Human–Robot Interactions

by Mariacarla Staffa 1,*,†, Lorenzo D’Errico 2,† and Antonio Maratea 1,†
1 Department of Science and Technologies, University of Naples Parthenope, 80143 Napoli, Italy
2 Department of Electrical Engineering and Information Technologies (DIETI), University of Naples Federico II, 80138 Napoli, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Robotics 2024, 13(10), 144; https://doi.org/10.3390/robotics13100144
Submission received: 1 August 2024 / Revised: 18 September 2024 / Accepted: 24 September 2024 / Published: 27 September 2024
(This article belongs to the Special Issue Social Robots for the Human Well-Being)

Abstract:
This study explores the role of social identity in human–robot interactions, focusing on a scenario where a humanoid robot functions as a bartender with either a positive or negative personality. Conducted with 28 participants, the experiment utilized the Big-5 questionnaire to assess personality traits and the Godspeed questionnaire to gauge perceptions of the robot. The research sought to determine whether users could perceive the robot’s distinct identities and whether these perceptions were influenced by the participants’ personality traits. The findings indicated that participants could effectively discern the robot’s different personalities, validating the potential for programming robots to convey specific social identities. Despite the limited sample size, the results suggest that participants’ initial emotional states and personality traits significantly influenced their perceptions, indicating that customizing a robot’s identity to match the interlocutor’s personality can enhance the interaction experience. As a preliminary investigation, this study contributes valuable insights into human–robot interaction dynamics and lays the groundwork for future research on the development of socially integrated robotic systems.

1. Introduction and Related Works

Human-to-human interactions have long served as inspiration for the design and behavior of robots meant for human interactions. This method is especially relevant when discussing the social aspects of interaction [1]. In recent years, several academics have focused on developing more human-like features in robots, such as empathy, mutual understanding, and theory of mind [2,3,4]. These efforts seek to develop robots that are not only credible, reliable, and effective in social and assistive contexts but also improve the interaction experience without causing harm to users [5].
Personalizing robot behavior to adapt to human interlocutors has been a prevailing strategy in human–robot interactions (HRIs) [6]. Breazeal’s seminal work on social robots emphasizes the importance of affective and cognitive skills in robots to facilitate meaningful interactions with humans [7]. Similarly, the concept of adaptive robotic behavior is explored by Tapus et al., who discuss how personalized interaction strategies can improve user engagement and task performance in assistive robotics [8]. While personalization is crucial, this study advocates for a more humanized approach: rather than striving for a fully adaptive robot, we aim to endow the robot with distinct human characteristics, including a social identity. Social identity in robots is useful because it enhances human–machine interactions, making robots more acceptable and understandable within the social context they operate in. Robots that exhibit a socially identifiable persona may be viewed as more trustworthy and empathetic, which promotes more organic interactions and narrows the gap between people and machines [9,10]. Moreover, a well-defined social identity can help robots integrate better into human social groups by adapting their behaviors and responses to the group’s norms and expectations.
This approach is informed by theories from social psychology, such as the Social Identity Theory (SIT) proposed by Tajfel and Turner [11], which suggests that individuals categorize themselves and others into groups, forming social identities that influence their interactions. We also draw on a broader notion of social identity: the way in which people see themselves as individuals, including attributes such as skills, talent, social abilities, and personality. Social identity presents itself in three ways: physical (e.g., facial and body features), social (e.g., age, profession, and culture), and psychological (e.g., personality, the constant style of our behavior).
In this study, we aim to observe whether the psychological aspect of social identity, modeled as a change in personality, can affect users’ perception of a robot’s identity (talent, skills, and other social abilities). A positive identity, for example, can be defined as having positive feelings and personality traits that support good psychological health and social functioning [12]. Despite the wealth of research on personalization, the exploration of social identity in HRIs remains limited. Studies by Fiore et al. [13] have shown that robots exhibiting social behaviors, such as displaying emotions and social cues, can enhance user perception and acceptance. However, there is a gap in understanding how robots with distinct social identities impact interaction dynamics. Research on human–human interactions provides insights into the potential benefits of integrating social identity into HRIs. For instance, in [14], the authors discuss how social categorization influences interpersonal behavior and perception. In [15], the authors applied the Computers are Social Actors (CASA) paradigm to extend the predictions of Social Identity Theory (SIT) to human–robot interactions within the context of instructional communication. Using the CASA framework, researchers have shown that people attribute human characteristics to computers, assigning them traits such as dominance, submissiveness, gender [16], expertise, and others. This includes not only physical attributes but also psychological and cognitive abilities, highlighting the human inclination to anthropomorphize technology and view computers as social entities with distinct personalities [17].
Similarly, in [18], the authors demonstrated that age identity plays a significant role in shaping social identity, further emphasizing how identity factors influence human interactions. In [19], the authors extended SIT to educational settings, illustrating how age identity can affect relationships and dynamics within the college classroom, providing valuable insights into instructional communication.
Building on these insights, this study aims to investigate whether personality identity, similar to age identity, plays a crucial role in shaping social identity. By focusing on the concept of social identity, we aim to explore its impact on human–robot interactions. We hypothesize that robots with well-defined social identities can foster better communication and connection with users, ultimately enhancing the effectiveness and satisfaction of interactions and engagement in long-term interactions [20]. This study seeks to demonstrate the benefits of this humanized approach, contributing to the ongoing evolution of social and assistive robotics. More specifically, we hypothesize that endowing a robot with a specific social identity, whether similar to or different from that of its human counterpart, will lead to improved understanding and acceptance of the robot [21]. Our hypothesis draws from extensive research on social interaction, which indicates that individuals often interpret the complexity and diversity of their social environment by categorizing themselves and others based on group memberships. This categorization process, where group affiliations become internalized and integral to one’s self-concept, forms what Tajfel and Turner (1986) referred to as “social identity” [11]. This concept is pivotal in understanding how individuals navigate social interactions, as it shapes their sense of self and influences their behavior within various social contexts. The exploration of robot identity is a developing field, but it is essential for improving the effectiveness of robots in social contexts: while there is substantial interest in the broader aspects of human–robot social interactions, research specifically focusing on social identity in these interactions remains sparse. The current study aims to bridge this gap by integrating insights from human–human social interaction research into the realm of human–robot interactions.
Experimental setups in HRIs often involve scenarios that mimic real-world interactions to evaluate robot behavior and user responses. For example, a study by Kidd and Breazeal [7] explores how a robot’s nonverbal communication affects user engagement in a storytelling task. Our experimental design, involving a humanoid robot bartender with contrasting social identities, builds on such methodologies to investigate the role of social identity in HRIs. Analyzing HRI data requires robust statistical methods to uncover underlying patterns and dependencies. Robins et al. highlight the use of factor analysis and regression techniques in evaluating user responses to robotic behavior [22]. Our study uses factor analysis (without regression) to assess how participants’ perceptions of the robot are influenced by its social identity and interaction style. The integration of social identity into HRIs represents a novel approach that extends beyond traditional personalization strategies. By drawing on social psychology theories and employing rigorous experimental methods, this study aims to demonstrate that robots with distinct, relatable social identities can enhance communication, understanding, and acceptance among human users. This research contributes to the ongoing evolution of social and assistive robotics, paving the way for more human-like and socially integrated robotic systems.
To align with our research goals, we developed a human–robot interaction scenario set in a bar, a social environment where social dynamics are key to fostering understanding between participants. More precisely, we built on the experimental framework of our earlier work [10] by conducting a more thorough examination of the relationship between personality traits and the Godspeed items. We also used factor analysis to investigate additional factors that might have an impact on the outcomes. Another noteworthy distinction in this investigation is the modification of the robot’s behavior model according to previous findings, which made it possible to more closely match the robot’s actions to the particular personality attributes we wanted to convey. In the proposed experiment, a humanoid robot was programmed to simulate a bartender with one of two contrasting social identities, modeled in terms of personality: (1) a positive identity, exemplified by a cheerful and friendly bartender, and (2) a negative identity, exemplified by a grumpy and disinterested bartender. We aimed to understand how participants perceived the robot in terms of anthropomorphism and cognitive and behavioral skills, and how these perceptions related to their personality traits. This investigation was divided into two main research questions:
  • RQ1: Can the different identities of the robot be effectively perceived by users? This question explores whether we can program and customize a robot’s behavior to convey a distinct identity and sense of self to the interlocutor, thereby enhancing mutual understanding and the ability of humans to predict the robot’s reactions.
  • RQ2: Is there a relationship between the robot’s identities and the personality traits of the participants? This question investigates whether similarities in character traits improve the interaction from the perception standpoint, suggesting how to program a robot’s identity to match the specific personality of its interlocutor.
To address these questions, we recruited 28 participants to interact with the robot in its two different identities. Data were collected using standard questionnaires: the Big-5 questionnaire was used to assess the participants’ personalities, and the Godspeed questionnaire was used to evaluate their perceptions of the robot. Statistical analyses, including paired t-tests and Pearson correlation coefficients, were then performed to identify differences or common patterns in the perceptions of the two identities. Additionally, given the possibility that factors beyond the robot’s identities could influence perception, we also conducted a factor analysis to explore dependencies among the variables under study. By examining the role of social identity in both human–human and human–robot interactions, we seek to uncover how these identities influence the perception and understanding of robots. This approach not only complements existing studies but also extends the application of social identity theory to the field of robotics, paving the way for more human-like and socially integrated robotic systems.

2. Materials

The data in this study came from volunteers who took part in multiple interactions with a virtual robot and were asked to fill out specific questionnaires. These data were analyzed with statistical tools to highlight correlations and latent factors that could explain them.

2.1. Participants

This study was based on voluntary participation, mostly among Computer Science students from the University of Naples “Parthenope”. A total of 60 people were asked to participate; among them, 20 Computer Science students and 8 people recruited informally agreed. The others declined, citing privacy concerns. The average age of the participants was 25 years, although two of them were above 50 years old. The sample was not gender-balanced, with males being the majority. The percentage of women in this study (7/28 = 25%) is consistent with the Global Gender Gap Report (2023) [23], in which women represent 29.2% of the STEM workforce across the 146 nations evaluated.
All the actual participants successfully completed all the required tests: their data were anonymized and shuffled to prevent any possible identification.

2.2. Informed Consent

All individuals involved in this study were required to provide informed consent, fostering transparency, ethical research practices, and the well-being of the participants throughout the study. The consent form gave a comprehensive overview of the research project, its objectives, and the role that the participants would play.
The specific details included the following:
  • Purpose of data collection: Participants received detailed information about the purpose of data collection, encompassing the research themes, specific inquiries, and overall study objectives.
  • Use of data: Participants were informed about the intended use of the collected data, emphasizing data anonymization to protect their identities and subsequent statistical analysis.
  • Confidentiality and anonymity: Assurances were given to participants regarding the confidentiality and anonymity of their information. It was stressed that identifying data would be removed, replaced, or remain undisclosed.
  • Voluntary participation: Participants were explicitly informed that their involvement in the study was voluntary and that they retained the right to withdraw at any point without facing any adverse consequences.
  • Researcher contacts: The consent form provided contact information, allowing participants to reach out with any further inquiries or clarifications.

2.3. The Big Five Inventory Test

The questionnaire known as the Big Five Inventory (BFI) test serves as a tool for assessing personality traits; its development spans from the 1980s onward. It delineates five factors, commonly referred to by the following English labels:
  • Openness: reflecting receptiveness to new experiences, indicating traits such as inventiveness and curiosity, or, conversely, highlighting a tendency towards caution;
  • Conscientiousness: gauging the degree of efficiency and organization in a person or, alternatively, capturing characteristics of eccentricity, distraction, and disorganization;
  • Extraversion: assessing whether a person is outgoing and energetic or, on the contrary, more solitary and reserved;
  • Agreeableness: measuring the prevalence of a friendly and compassionate nature or, alternatively, emphasizing a more critical and rational aspect;
  • Neuroticism: associated with an individual’s sensitivity and tense disposition or, alternatively, with someone who is self-assured and highly resilient.
Certain words used to describe aspects of personality are often associated with certain traits and applied to the same person. For example, someone described as conscientious is more likely to be described as “always prepared” rather than “disorganized”. These associations suggested the five broad dimensions commonly used in everyday language to describe personality, temperament, and the human psyche [24]. The labels for the five factors can be remembered using the acronyms “OCEAN” or “CANOE”. Under each overarching factor, there is a series of more specific primary factors; for example, extraversion is typically associated with qualities such as sociability, assertiveness, excitement-seeking, warmth, activity, and positive emotions [25]. The BFI uses a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree); scale scores are computed as the participant’s mean item response (i.e., summing all items on a scale and dividing by the number of items on the scale). The reliability reference adopted for the BFI is Cronbach’s alpha: 0.81 for Energy, 0.73 for Agreeableness, 0.81 for Conscientiousness, 0.90 for Emotional Stability, and 0.75 for Openness.
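BFI scale scoring (mean item response, with reverse-keyed items recoded as 6 minus the response on a 1–5 scale) can be sketched as follows; the item groupings below are hypothetical and for illustration only, not the BFI’s actual item keys:

```python
# Illustrative BFI scale scoring: scale score = mean of item responses,
# with reverse-keyed items recoded as (6 - response) on a 1-5 scale.
# Item groupings are hypothetical, for demonstration only.

def score_scale(responses, items, reversed_items=()):
    """Mean response over `items`, recoding any item in `reversed_items`."""
    vals = []
    for i in items:
        r = responses[i]
        vals.append(6 - r if i in reversed_items else r)
    return sum(vals) / len(vals)

# Example: five answers on a 1-5 Likert scale, keyed by item number.
answers = {1: 4, 2: 2, 3: 5, 4: 1, 5: 3}
extraversion = score_scale(answers, items=[1, 3, 5])    # (4 + 5 + 3) / 3 = 4.0
agreeableness = score_scale(answers, items=[2, 4],
                            reversed_items={4})         # (2 + 5) / 2 = 3.5
```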

2.4. The Godspeed Test

The Godspeed Questionnaire Series (GQS) is one of the most frequently referenced and widely employed questionnaires in the domains of human–robot and human–agent interactions. Since its inception in 2009, it has achieved global reach, having been translated into 19 languages. The development of the GQS was driven by the need for standardized measurement tools in the realm of human–robot interactions (HRIs). The series facilitates the examination of five crucial concepts in HRIs: anthropomorphism, animacy, likability, perceived intelligence, and perceived safety. The foundation of the GQS is the semantic differential: answers are recorded on a five-point rating scale using contrasting phrases, such as “unconscious” and “conscious”, as anchors [26]. If every positive term is placed on the right and receives the highest score, analysis becomes easier; to encourage participants to pay closer attention, researchers may instead flip the order of some terms, which requires extra caution when reversing the scores for those questions during data analysis. Each of the five GQS concepts has three to six items, and each concept’s average score can be used to represent that concept.
  • Module I: anthropomorphism measures the extent to which the robot is perceived as human-like. It includes items that assess whether the robot is seen as having human characteristics, such as being conscious, lifelike, or elegant in movement. This dimension could help determine if the robot’s personality (positive or negative) influences how human-like the participants perceive the robot to be;
  • Module II: likability assesses the emotional response participants have toward the robot, including whether the robot is perceived as friendly, pleasant, and nice. This dimension is crucial in our experiment to understand if the robot’s personality traits (positive or negative) impact how much the participants like or feel positively toward the robot;
  • Module III: perceived intelligence evaluates how intelligent and competent the robot is perceived to be. It includes items like knowledgeable, sensible, and responsible. This could reveal whether a positive or negative personality affects participants’ perceptions of the robot’s intelligence and capability;
  • Module IV: perceived safety measures the participants’ comfort levels and sense of safety when interacting with the robot. It includes items assessing whether the robot seems safe, calm, and not intimidating. This dimension is particularly important in our study to determine if the robot’s personality influences how secure or anxious participants feel during the interaction.

2.5. Factor Analysis

Assuming that a phenomenon is influenced by some latent factors that cannot be directly observed, factor analysis (FA) is a quantitative method that allows estimating these factors starting from some observed variables [27]. A latent factor is assumed to be a linear combination of the observed variables, but unlike Principal Component Analysis (PCA), a reconstruction error term is included in the model. FA can be used for dimensionality reduction, as a step in causal inference or to ease the interpretability of results in complex scenarios.
Exploratory Factor Analysis (EFA) can be used to investigate the underlying structure of a dataset to identify the latent factors that account for observed correlations among a set of measured variables. This is achieved through a series of steps: Initially, factors are extracted from the correlation matrix using methods such as Principal Axis Factoring or Maximum Likelihood. Subsequently, factor rotation techniques—such as Varimax or Promax—are applied to enhance the interpretability of the factor structure by achieving a simpler and more meaningful configuration. The interpretation of factors is based on the loadings of variables, which indicate the strength of the relationship between each variable and the latent factors.
The insights gained from EFA lay the groundwork for subsequent Confirmatory Factor Analysis (CFA), which can further validate the factor structure identified during the exploratory phase. When conducting an EFA, various arbitrary decisions must be made, guided by theory, practice, or inferential tests. In the case of CFA, a null hypothesis and an alternative hypothesis can be formulated. In simplistic terms, the null hypothesis assumes that the model is true, leading to an estimate of the discrepancy between reality and the results of the analysis. The better the estimated model aligns with reality, the more explanatory power the model has [28].
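The extraction-and-rotation steps above can be sketched with NumPy on synthetic data. This is an illustrative sketch only, using the principal-component extraction method and a plain varimax rotation; a real analysis would typically rely on a dedicated package and may use other extraction methods such as Principal Axis Factoring or Maximum Likelihood:

```python
import numpy as np

# EFA sketch: extract loadings from the correlation matrix's
# eigendecomposition, then apply a varimax rotation. Synthetic data only.

rng = np.random.default_rng(0)
# Two latent factors driving six observed variables, plus noise.
latent = rng.normal(size=(200, 2))
weights = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                    [0.1, 0.9], [0.0, 0.8], [0.2, 0.7]])
X = latent @ weights.T + 0.3 * rng.normal(size=(200, 6))

R = np.corrcoef(X, rowvar=False)          # correlation matrix of observed vars
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:2]     # keep the two largest factors
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(L, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix."""
    p, k = L.shape
    T = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        B = L @ T
        u, s, vt = np.linalg.svd(
            L.T @ (B**3 - B @ np.diag((B**2).sum(axis=0)) / p))
        T = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return L @ T

rotated = varimax(loadings)   # simpler, more interpretable loading pattern
```

Because the rotation is orthogonal, the total explained variance (the sum of squared loadings) is unchanged; only the distribution of loadings across factors becomes easier to interpret.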

3. Method

We used a dual-conditional (positive identity vs. negative identity) within-participant experiment design. Both conditions were presented to all participants. Our goal was to explore whether a robot’s identity can be modeled through its behavior and how humans perceive it in connection with their own personality. The experiment consisted of two interactions with the virtual robot, Furhat, which played the role of a bartender. We chose a bar scenario since it is particularly suitable for social interactions. Candidates were asked to interact as customers, engaging in conversation with the robot (whenever it asked something) and placing drink orders (choices were also displayed on the screen and could be selected using dedicated buttons). For each experiment, the robot assumed a different social identity modeled in terms of its personality (positive or negative) and conveyed by changing both facial expressions and dialog registers (Figure 1). At the start of our application, one of the two bartenders was randomly assigned:
  • Positive identity: a cheerful bartender with a playful demeanor.
  • Negative identity: a grumpy bartender with gruff manners.
In Figure 2, a timeline of the experiment for each student is shown.
The within-subject design was chosen because it allowed controlling exogenous (demographic, social, and cultural) variables and required fewer participants to detect an effect of the same size. Because the study has only two conditions, randomizing the order is equivalent to counterbalancing: there are only two possible orderings, and roughly 50% of the subjects are exposed to each one.
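The order assignment can be sketched as follows; this is an illustrative reconstruction of the randomization logic, not the study’s actual code:

```python
import random

# Illustrative order assignment for the dual-conditional within-subject
# design: each participant's first identity is drawn at random; with only
# two conditions, this approximates counterbalancing (~50% per ordering).

def assign_order(rng=random):
    first = rng.choice(["positive", "negative"])
    second = "negative" if first == "positive" else "positive"
    return (first, second)

random.seed(7)                                # seeded for reproducibility
orders = [assign_order() for _ in range(28)]  # 28 participants, as in the study
n_positive_first = sum(o[0] == "positive" for o in orders)
```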
For the choice of facial emotions, we were inspired by the studies of Paul Ekman, author of the so-called “neurocultural theory” of emotions [29]. In his theory, the author argues that there are facial expressions deriving from emotions experienced in certain situations, characterized by universal mimics. According to the neurocultural theory, in addition to the universality of emotional expressions, there are display rules, or social rules of expression of emotions, culturally learned, which determine the control and modification of emotional expressions depending on the social circumstance. Based on these theories, we associated particular facial expressions and different dialogue registers to mimic a positive or negative social identity. Moreover, in [30], the authors investigate how auditory/verbal features like voice pitch and language cues (e.g., empathy and humor) affect interactions with social robot receptionists. They designed two robots: Olivia, who is extroverted, humorous, and has a high voice pitch, and Cynthia, who is introverted, serious, and has a low voice pitch. The study found that voice pitch significantly influenced users’ ratings of interaction quality, robot appeal, and enjoyment. Humor enhanced users’ perceptions of task enjoyment, robot personality, and speaking style, while empathy improved evaluations of the robot’s receptiveness and ease of interaction, demonstrating how the design of conversations could affect the perception of a robot and, thus, the result of the interaction.
Utilizing a laptop as the primary medium, participants were invited to peruse an information sheet, presented before both questionnaires, and subsequently undertake the Big Five test to assess their personality traits. Following this preliminary phase, the interaction with the Furhat robot started: a random selection between the two implementations initiated a dialogue between the user and the virtual bartender. Within this interactive scenario, participants, after a brief exchange, were able to place orders for beverages, whether alcoholic or non-alcoholic, either through designated buttons or explicit verbal requests. Participants retained the option to ask for a comprehensive list of available drinks. In the case of alcoholic beverage selections, Furhat conscientiously inquired about the age of the client, ensuring legal adherence. After finishing their chosen drinks, participants were at liberty to place additional orders or conclude their interaction. After the first interaction, participants were prompted with the Godspeed test, a structured questionnaire designed to capture and evaluate various facets of their experience with the robot.
The next step was a second interaction with the Furhat bartender routine, which, this time, was not randomly chosen but corresponded to the identity that had not been selected during the first interaction with the virtual social robot. The aim was to explore how the different routines might influence the perceptions and responses of clients. After this second interaction, participants were once again requested to fill out the Godspeed questionnaire for a dual purpose: first, to capture their impressions; second, to enable a direct comparison with their responses from the initial interaction. This allowed any discernible shifts, preferences, or disparities in participant experiences between the two interactions to be captured.

3.1. Furhat Programming

As mentioned earlier, the goal was to create a bartender with both grumpy and gentle behaviors, namely a negative identity and a positive one. The implemented behaviors were built using Kotlin, which serves as the domain-specific language for the Furhat robot and offers a rich set of libraries for face customization and the implementation of facial expressions.

3.1.1. Robot with Negative Identity

In Table 1, we describe the behavior of the robot with a negative identity (see also Figure 3). Unlike the positive identity, if the user failed to reply to its questions more than twice, it refused to communicate further and terminated the conversation (“leaving” the area). If the user ordered an alcoholic drink, the robot asked whether they were underage (as can be seen in Table 2). This question was asked only once per client; after the user responded, the dialogue returned to the interaction module.

3.1.2. Robot with Positive Identity

When users encounter the positive identity (Figure 4), the interaction starts with a greeting (Table 3) and then unfolds according to the user’s responses (Table 4). Once the user replies to the question, there are two possibilities: if the answer is “yes”, the dialogue moves on to the joking module (Table 5); otherwise, it proceeds to the interaction module. The interaction module is a loop: as long as the user wants to order something, the routine stays inside this module; when they no longer wish to order, the interaction ends. If the user does not communicate with the positive bartender, it asks whether they need more time to decide, which again invokes the joking module. Like the negative identity, the positive identity also has an age module, in which the legal drinking age is confirmed: it is the same module described previously for the negative identity.
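The module flow described above can be sketched as plain control flow. The actual skill is a Kotlin state machine for the Furhat SDK; the module names, drink list, and scripted inputs below are hypothetical, for illustration only:

```python
# Control-flow sketch of the positive-identity bartender routine described
# above. Module names and the drink list are illustrative; the actual skill
# is a Kotlin state machine for the Furhat SDK.

ALCOHOLIC = {"beer", "wine", "spritz"}        # hypothetical drink list

def run_positive_identity(replies, orders):
    """Walk the dialogue modules for scripted user input; return the
    sequence of modules visited."""
    visited = ["greeting"]
    if replies.get("greeting") == "yes":
        visited.append("joking")              # "yes" -> joking module
    age_checked = False
    for drink in orders:                      # interaction loop: keep ordering
        visited.append("interaction")
        if drink in ALCOHOLIC and not age_checked:
            visited.append("age_check")       # asked only once per client
            age_checked = True
    visited.append("goodbye")                 # loop exits when ordering stops
    return visited

modules = run_positive_identity({"greeting": "yes"}, ["beer", "cola", "wine"])
# -> greeting, joking, interaction, age_check, interaction, interaction, goodbye
```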

4. Results

In this section, first of all, the Godspeed results are analyzed and discussed; then, the following questions are addressed:
  • Were there any significant differences between the average scores of the Godspeed test obtained from the two robot personalities? For this, a paired t-test was used (if there were no significant differences, the analysis would stop here).
  • Were there any correlations between the personality traits of the users and the Godspeed variables? For this, a Pearson correlation coefficient was used. This is a paired-variable analysis.
  • Were there any lurking variables explaining multiple correlations? Exploratory Factor Analysis (EFA) was chosen to find lurking variables, called factors, able to explain the correlation among multiple observed variables and group them. The loadings give the weight, that is, the influence, of each original variable on the factors and are essential for interpreting their meaning.
  • Were there any differences in the lurking variables between the two robot personalities? Differences in the loadings on two separate EFAs are discussed.
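The first two steps of this pipeline can be sketched on synthetic data as follows. In practice, library routines such as scipy.stats.ttest_rel and scipy.stats.pearsonr would also supply p-values; all numbers and variable names here are made up purely for illustration and are not the study’s data:

```python
import numpy as np

# Sketch of the paired t statistic and the Pearson correlation on synthetic
# data shaped like the study (28 paired ratings). Illustration only.

rng = np.random.default_rng(1)
pos = rng.normal(4.0, 0.5, size=28)            # e.g., likability, positive identity
neg = pos - rng.normal(0.8, 0.3, size=28)      # same raters, lower scores

# Paired t statistic: mean of the per-subject differences over its
# standard error.
diff = pos - neg
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))

# Pearson correlation between a Big-5 trait and a Godspeed rating.
def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

extraversion = rng.normal(3.5, 0.6, size=28)
rating = 0.5 * extraversion + rng.normal(0.0, 0.4, size=28)
r = pearson_r(extraversion, rating)
```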

4.1. Godspeed Analysis

In Table 6, the mean, median, and standard deviation of the GQS are presented for each sub-dimension of the questionnaire. The table shows the values according to the interaction with both the positive and negative personalities of the robot. In order to have a better understanding of the results, a first visual representation of the data through bar plots is presented in Figure 5, Figure 6, Figure 7 and Figure 8.
Figure 5 shows participants’ perceptions of the robot’s human-like characteristics across the anthropomorphism items. The robot with the positive identity generally scores slightly higher on items such as “fake–natural” and “machine-like–human-like”, but both identities perform similarly on “artificial–lifelike” and “moving rigidly–moving elegantly”. This suggests a moderate improvement in the robot’s perceived human likeness when it exhibits positive traits. Figure 6 shows a substantial difference between the positive and negative robot identities in terms of likability. The positive robot scores markedly higher on all items—“dislike–like”, “unfriendly–friendly”, “unkind–kind”, “unpleasant–pleasant”, and “awful–nice”—highlighting that a more positive and friendly robot identity drastically improves user likability and perception. The perception of the robot’s intelligence (Figure 7) is moderately affected by the robot’s identity: the positive robot scores higher in perceived intelligence, competence, and responsibility. Although the difference is not as pronounced as for likability, users still tend to associate the positive identity with greater intelligence. Finally, Figure 8 shows that the perception of safety varies somewhat between the two identities. The positive identity makes participants feel more relaxed and calm, while the negative identity induces slightly more anxiety. This suggests that emotional aspects of the robot’s behavior, such as friendliness or grumpiness, can influence participants’ comfort and perceived safety during the interaction. Overall, the bar plots illustrate the marked effect of the robot’s social identity on participants’ perceptions, with a clear trend toward higher ratings for the positive identity in terms of likability, intelligence, and safety.
However, anthropomorphism remains relatively stable across both identities, suggesting that human likeness may be less dependent on emotional tone and more on physical or behavioral cues.

4.2. Paired t-Test

First of all, the paired-sample t-test (sometimes called the dependent-sample t-test or within-subjects t-test) was used to see if there were significant differences among the average scores of the Godspeed test for the two interactions: the one with the negative identity and the one with the positive identity. Scores were represented on a centered Likert scale [31].
Let S_ij(A) denote the score of the ith participant on the jth question for the negative identity (A), and let S_ij(E) denote the corresponding score for the positive identity (E). The paired t-test measures the significance of the differences between S.j(A) and S.j(E), where the dot denotes averaging over all participants. The results of the test are presented in Table 7.
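As an illustration, the within-subjects comparison described above can be sketched as follows. The data here are synthetic Likert-style scores, not the study’s raw data, so all variable names and values are purely hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants = 28

# Hypothetical per-participant scores on one Godspeed item:
# scores_A for the negative identity, scores_E for the positive one.
scores_A = rng.normal(loc=2.5, scale=1.0, size=n_participants)
scores_E = scores_A + rng.normal(loc=1.5, scale=0.5, size=n_participants)

# Paired (dependent-sample) t-test: each participant rated both identities,
# so the test operates on the per-participant differences.
t_stat, p_value = stats.ttest_rel(scores_E, scores_A)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

With real data, one such test is run per questionnaire item, which is why a Bonferroni-adjusted threshold (0.05 divided by the number of items) is applied when judging significance.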

4.3. Pearson’s Correlation

Additionally, Pearson’s correlation coefficient was used to evaluate whether there was any linear relationship between the average personality traits calculated and the perception of anthropomorphism, likability, and intelligence. The correlation values were also tested for significance against the null hypothesis R = 0. For the positive identity, there appeared to be quite a strong correlation between extroversion/openness and competence and between openness and kindness; for the negative identity, there seemed to be a strong correlation between neuroticism and pleasantness/niceness. However, applying the Bonferroni correction to the significance threshold for multiple tests (α = 0.05/28), the null hypothesis cannot be rejected. More data are necessary to confirm these correlations. The correlations can be read in Table 8 and Table 9.
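The correlation screening with Bonferroni correction can be sketched as follows. The data are again synthetic, and the trait and item names are merely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 28
n_tests = 28  # total number of correlation tests performed

# Hypothetical trait and perception scores (synthetic, for illustration):
# a strong linear relationship is built in on purpose.
openness = rng.normal(loc=3.5, scale=0.7, size=n_participants)
competence = openness + rng.normal(loc=0.0, scale=0.3, size=n_participants)

r, p = stats.pearsonr(openness, competence)

# Bonferroni correction: divide the nominal alpha by the number of tests.
alpha_corrected = 0.05 / n_tests  # ~0.0018
print(f"r = {r:.3f}, p = {p:.4f}, significant: {p < alpha_corrected}")
```

Note that with only 28 participants, an uncorrected p-value below 0.05 can easily fail the corrected threshold, which is exactly the situation reported above.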

4.4. Factor Analysis

To add depth to the exploration, this study also incorporated factor analysis. This technique aims to find latent, meaningful variables that explain the differences between the two interactions, incorporating the participants’ personality traits into the analysis.
We performed an R-factor analysis; that is, we used the correlation matrix among variables, not among subjects (Q-factor analysis). Hence, the data are the observed variables, not the subjects.
First of all, the scores in the S_ij(A) group and in the S_ij(E) group were considered together, and factor analysis was performed, as can be seen in Figure 9.
Then, the Big-5 test scores were appended to the Godspeed data, yielding S_i(j+k)(A) and S_i(j+k)(E), where k is the number of parameters in the Big-5 test, and factor analysis was performed separately on the two groups to highlight connections between personality and perceptions (see Figure 10 and Figure 11). Finally, factor analysis was executed on the S_ij(A) group and on the S_ij(E) group, and the differences in loadings are highlighted in Figure 12 and Table 10.
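As a rough illustration of R-factor extraction, the sketch below approximates factor loadings via a principal-components decomposition of the correlation matrix among variables. This is a simplification of the exploratory factor analysis actually used (no rotation, principal-components rather than maximum-likelihood extraction), and the synthetic data are hypothetical:

```python
import numpy as np

def approximate_loadings(X, n_factors=3):
    """Simplified factor extraction: eigendecompose the correlation matrix
    among variables (R-analysis); loadings = eigenvector * sqrt(eigenvalue)."""
    R = np.corrcoef(X, rowvar=False)        # correlations among variables
    eigvals, eigvecs = np.linalg.eigh(R)    # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, top] * np.sqrt(eigvals[top])

# Synthetic data: 4 observed variables driven by 2 latent factors.
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(2, 200))
X = np.column_stack([f1 + 0.3 * rng.normal(size=200),
                     f1 + 0.3 * rng.normal(size=200),
                     f2 + 0.3 * rng.normal(size=200),
                     f2 + 0.3 * rng.normal(size=200)])
L = approximate_loadings(X, n_factors=2)
print(np.round(L, 2))  # loadings recover the two-block structure (up to rotation/sign)
```

Each row of the loading matrix corresponds to one observed variable; the sum of its squared loadings (the communality) indicates how much of that variable the retained factors explain, which is how the "likability", "credibility", and "discomfort" factors below are interpreted.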
Examining the loadings of each identified factor, that is, which parameters are most strongly represented in each factor, the first factor can be interpreted as a “likability index”: it captures all parameters of the likability module (dislike–like, unfriendly–friendly, unkind–kind, unpleasant–pleasant, and awful–nice) and the majority of the perceived intelligence module (incompetent–competent, irresponsible–responsible, and foolish–sensible). The second factor mirrors the anthropomorphism module of the Godspeed test, representing four of its five parameters (fake–natural, machine-like–human-like, artificial–lifelike, and moving rigidly–moving elegantly); for this reason, it is termed the “credibility factor”. Finally, the third factor (the fourth from the left in the loading plot), labeled the “discomfort index”, captures all parameters of the perceived safety module with the expected sign pattern: positive loadings on “calm–agitated” and “still–surprised” are paired with a negative loading on “anxious–relaxed”, as one would expect.
These interpretations were confirmed by the two subsequent analyses, in which the observations from the interaction with the positive identity and those from the interaction with the negative identity were studied separately. These analyses further highlight the distribution of the aforementioned factors.

5. Discussion and Conclusions

This study underscores the significance of incorporating distinct social identities into robots to enhance their effectiveness in social interactions. By programming a humanoid robot to assume contrasting personalities in a bar setting, we aimed to investigate how these identities are perceived by users and how they interact with users’ personality traits. The paired t-test, an effective method for comparing two sets of paired measurements, was chosen to discern differences in participant impressions between the two robot identities. Concurrently, the Pearson correlation coefficient was leveraged to explore potential relationships between participants’ average personality traits and their assessments of the robot’s behaviors. The examination of potential dependencies between variables was then extended to latent factors through factor analysis, which aimed to uncover hidden patterns and relationships among the parameters of the different Godspeed modules and participants’ personality traits. This not only validated certain hypotheses but also paved the way for a deeper understanding of the interplay between individual differences, emotional responses, and the subtleties of robotic behavior.
The heatmaps of the factor analysis compare the factor loadings of Godspeed dimensions against Big-5 personality traits for different emotional conditions: positive and negative identities. In the negative identity robot setting, traits such as naturalness and human likeness show strong associations with the competence factor, while friendliness aligns with the credibility factor. The positive identity robot setting shifts the importance to the likability index for traits related to naturalness and human likeness, and extraversion becomes prominent in the competence factor. The differences between heatmaps highlight significant perceptual shifts between the two states, particularly in traits like moving, naturalness, and competence. These findings suggest that social identity significantly influences the perception of robotic agents, providing valuable insights for their design and development.
Our findings revealed that participants could indeed discern the different identities of the robot, supporting our hypothesis that a robot’s behavior can effectively convey a sense of identity. This aligns with our first research question (RQ1) about the ability to program and customize the robot’s behaviors to enhance mutual understanding and predictability.
Moreover, the analysis indicated a relationship between the robot’s identities and participants’ personality traits, addressing our second research question (RQ2). Specifically, we found that participants’ initial emotional states and personality traits influenced their perceptions of the robot, suggesting that customizing a robot’s identity to match the interlocutor’s personality could improve the interaction experience.
The results highlight the importance of tone and interaction style in shaping participants’ impressions of the robot. Even when the robot displayed a negative attitude, its credibility significantly impacted how it was perceived. This finding underscores the potential of designing robots with human-like social identities to foster better communication and connection, thereby enhancing user satisfaction and engagement in long-term interactions.
By integrating insights from human–human social interaction research, this study contributes to the evolving field of social and assistive robotics. It provides a foundation for developing more human-like and socially integrated robots, ultimately advancing the understanding of human–robot interaction dynamics.
In light of the obtained results, future developments hold significant promise for advancing our understanding of the intricate dynamics between personality traits and human reactions during interactions with robots endowed with a social identity. Expanding this study to a more diverse user pool spanning various age groups would not only serve to validate the hypotheses established but also contribute to a broader comprehension of how different demographics respond to social robots. This exploration could uncover subtle variations in reactions, shedding light on the role age plays in shaping perceptions and emotional responses. Moreover, the application of similar experiments in different contexts, particularly those involving assistance to the elderly or aiding children with impairments, presents a compelling avenue for research. Understanding how individuals from these specific demographic groups interact with and adapt to social robots is crucial for tailoring robotic systems to meet the unique needs and sensitivities of these populations. Assessing the readiness for practical coexistence with social robots in such contexts could pave the way for the development of more effective and empathetic robotic companions, enhancing the overall quality of life for those who stand to benefit the most.
In essence, this research represents an initial effort to explore strategies for designing robots capable of adapting to the evolving identities of individuals and groups. While this study focused on controlling the robot’s visual design to isolate behavioral factors, future work will expand on this by exploring how variations in the robot’s appearance might further influence user perceptions. This will help in developing a more comprehensive understanding of the interplay between a robot’s physical design and its social identity. The investigation of methods to accommodate the changing needs and preferences of humans is ongoing, as is the development of approaches for the analysis and synthesis of robotic identities over time. This reflects the dynamic nature of robotic identity within social contexts, highlighting the importance of flexibility and responsiveness in human–robot interactions. Future research would greatly benefit from exploring a broader spectrum of identity and personality combinations, as well as investigating how user perceptions evolve over extended periods of interaction. It would also be advantageous to establish more concrete research plans that outline specific areas of focus and methodologies for future studies, ensuring a comprehensive and targeted approach to advancing this field.

Author Contributions

Conceptualization, M.S.; methodology, M.S. and A.M.; validation, M.S. and A.M.; formal analysis, M.S., L.D. and A.M.; resources, M.S.; writing—original draft preparation, M.S. and L.D.; writing—review and editing, M.S., L.D. and A.M.; supervision, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

The work was supported by “RESTART—Robot Enhanced Social abilities based on Theory of mind for Acceptance of Robot in assistive Treatments” (CUP: I53D23003780001), funded by the MIUR with D.D. no. 861 under the PNRR and by Next Generation EU. The authors thank the student Gaetano Pagliarulo, who contributed to the data collection.

Conflicts of Interest

M.S. is serving as a guest editor for the Special Issue to which this article is being submitted. This role has been disclosed, and appropriate measures have been taken to ensure that this did not influence the peer-review process.

References

  1. Nass, C.; Moon, Y. Machines and Mindlessness: Social responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  2. Gena, C.; Manini, F.; Lieto, A.; Lillo, A.; Vernero, F. Can empathy affect the attribution of mental states to robots? In Proceedings of the ICMI ’23: 25th International Conference on Multimodal Interaction, Paris, France, 9–13 October 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 94–103. [Google Scholar] [CrossRef]
  3. Leite, I.; Pereira, A.; Mascarenhas, S.; Martinho, C.; Prada, R.; Paiva, A. The influence of empathy in human–robot relations. Int. J. Hum.-Comput. Stud. 2013, 71, 250–260. [Google Scholar] [CrossRef]
  4. Rossi, S.; Conti, D.; Garramone, F.; Santangelo, G.; Staffa, M.; Varrasi, S.; Di Nuovo, A. The Role of Personality Factors and Empathy in the Acceptance and Performance of a Social Robot for Psychometric Evaluations. Robotics 2020, 9, 39. [Google Scholar] [CrossRef]
  5. Staffa, M.; Rossi, S. Enhancing Affective Robotics via Human Internal State Monitoring. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, 29 August–2 September 2022; pp. 884–890. [Google Scholar] [CrossRef]
  6. Benedictis, R.D.; Umbrico, A.; Fracasso, F.; Cortellessa, G.; Orlandini, A.; Cesta, A. A dichotomic approach to adaptive interaction for socially assistive robots. User Model. User Adapt. Interact. 2023, 33, 293–331. [Google Scholar] [CrossRef] [PubMed]
  7. Breazeal, C. Toward sociable robots. Robot. Auton. Syst. 2003, 42, 167–175. [Google Scholar] [CrossRef]
  8. Tapus, A.; Ţăpuş, C.; Matarić, M.J. User—Robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intell. Serv. Robot. 2008, 1, 169–183. [Google Scholar] [CrossRef]
  9. Miranda, L.; Castellano, G.; Winkle, K. A Case for Diverse Social Robot Identity Performance in Education. In Proceedings of the HRI ’24: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; Grollman, D., Broadbent, E., Ju, W., Soh, H., Williams, T., Eds.; ACM: New York, NY, USA, 2024; pp. 28–35. [Google Scholar]
  10. Staffa, M.; D’Errico, L.; Francese, R. Empathizing with a Robot with a Personality. In Artificial Intelligence in HCI; Degen, H., Ntoa, S., Eds.; Springer: Cham, Switzerland, 2024; pp. 283–294. [Google Scholar]
  11. Tajfel, H.; Turner, J.C. The Social Identity Theory of Intergroup Behavior. In Political Psychology: Key Readings, Key Readings in Social Psychology; Psychology Press: New York, NY, USA, 2004. [Google Scholar]
  12. Keyes, C.; Shmotkin, D.; Ryff, C. Optimizing Well-Being: The Empirical Encounter of Two Traditions. J. Personal. Soc. Psychol. 2002, 82, 1007–1022. [Google Scholar] [CrossRef]
  13. Fiore, S.; Wiltshire, T.; Lobato, E.; Jentsch, F.; Huang, W.; Axelrod, B. Toward understanding social cues and signals in human–robot interaction: Effects of robot gaze and proxemic behavior. Front. Psychol. 2013, 4, 859. [Google Scholar] [CrossRef] [PubMed]
  14. Bodenhausen, G.; Kang, S.; Peery, D. Social categorization and the perception of social groups. In The SAGE Handbook of Social Cognition; SAGE Publications Inc.: Thousand Oaks, CA, USA, 2012; pp. 311–329. [Google Scholar] [CrossRef]
  15. Edwards, C.; Edwards, A.; Stoll, B.; Lin, X.; Massey, N. Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Comput. Hum. Behav. 2019, 90, 357–362. [Google Scholar] [CrossRef]
  16. Lee, E.J.; Nass, C.; Brave, S. Can computer-generated speech have gender? An experimental test of gender stereotype. In Proceedings of the CHI ’00 Extended Abstracts on Human Factors in Computing Systems, The Hague, The Netherlands, 1–6 April 2000; Association for Computing Machinery: New York, NY, USA, 2000; pp. 289–290. [Google Scholar] [CrossRef]
  17. Nass, C.; Moon, Y.; Fogg, B.; Reeves, B.; Dryer, D. Can computer personalities be human personalities? Int. J. Hum.-Comput. Stud. 1995, 43, 223–239. [Google Scholar] [CrossRef]
  18. Harwood, J.; Giles, H.; Ryan, E.B. Aging, communication, and intergroup theory: Social identity and intergenerational communication. In Handbook of Communication and Aging Research; Lawrence Erlbaum Associates, Inc.: Mahwah, NJ, USA, 1995. [Google Scholar]
  19. Harwood, J. Age identity and television viewing preferences. Commun. Rep. 1999, 12, 85–90. [Google Scholar] [CrossRef]
  20. Winkle, K.; Lemaignan, S.; Caleb-Solly, P.; Leonards, U.; Turton, A.J.; Bremner, P. Effective Persuasion Strategies for Socially Assistive Robots. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 277–285. [Google Scholar]
  21. Staffa, M.; Rossi, A.; Bucci, B.; Russo, D.; Rossi, S. Shall I Be Like You? Investigating Robot’s Personalities and Occupational Roles for Personalised HRI. In Social Robotics. ICSR 2021; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2021; Volume 13086, pp. 718–728. [Google Scholar] [CrossRef]
  22. Robins, B.; Dautenhahn, K.; Dubowski, J. Does appearance matter in the interaction of children with autism with a humanoid robot? Interact. Stud. 2006, 7, 479–512. [Google Scholar] [CrossRef]
  23. Available online: https://www3.weforum.org/docs/WEF_GGGR_2023.pdf (accessed on 23 September 2024).
  24. Goldberg, L.R. The structure of phenotypic personality traits. Am. Psychol. 1993, 48, 26–34. [Google Scholar] [CrossRef] [PubMed]
  25. Matthews, G.; Deary, I.J.; Whiteman, M.C. Personality Traits (PDF), 2nd ed.; Cambridge University Press: Cambridge, UK, 2014; ISBN 978-0-521-83107-9. [Google Scholar]
  26. Bartneck, C.; Croft, E.; Kulic, D. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  27. Sallis, J.E.; Gripsrud, G.; Olsson, U.H.; Silkoset, R. Factor Analysis. In Research Methods and Data Analysis for Business Decisions: A Primer Using SPSS; Springer International Publishing: Cham, Switzerland, 2021; pp. 223–243. [Google Scholar] [CrossRef]
  28. Rossi, G. Appunti di Analisi Fattoriale, Versione: 0.4.1.42; Università degli Studi di Milano-Bicocca, Dipartimento di Psicologia: Milano, Italy, 2018.
  29. Ekman, P. Emotion in the Human Face; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
  30. Niculescu, A.; van Dijk, B.; Nijholt, A.; Li, H.; Lan, S.S. Making Social Robots More Attractive: The Effects of Voice Pitch, Humor and Empathy. Int. J. Soc. Robot. 2013, 5, 171–191. [Google Scholar] [CrossRef]
  31. Carifio, J.; Perla, R. Resolving the 50-year debate around using and misusing Likert scales. Med. Educ. 2008, 42, 1150–1152. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) The grumpy, negative identity bartender, and (b) the kind, positive identity bartender.
Figure 2. Timeline of experimental procedure.
Figure 3. User interacting with the robot with a negative identity.
Figure 4. User interacting with the robot with a positive identity.
Figure 5. Histogram of anthropomorphism module.
Figure 6. Histogram of likability module.
Figure 7. Histogram of perceived intelligence module.
Figure 8. Histogram of the perceived safety module.
Figure 9. Factor loading Godspeed parameters.
Figure 10. Factor loading for the positive identity case.
Figure 11. Factor loading for the negative identity case.
Figure 12. Difference in factor loadings between the two cases.
Table 1. Negative social identity robot’s interaction module.

Interaction
Character   Dialogue                                     Module/State
Furhat      Hey
User        Hi/hello
Furhat      What do you want?
User        What do you have?
Furhat      (Lists some products)
User        Are these all the drinks?
Furhat      (Lists some products)
User        I want some wine (alcoholic product)         Age
Furhat      Anything else?
User        I want some water (non-alcoholic product)
Furhat      Anything else?
User        (Does not respond more than 2 times)         END
User        No
Furhat      Goodbye                                      END
Table 2. Bartender’s age module.

Age
Character   Dialogue                             Trigger
Furhat      Are you over 18?
User        Yes/no/>18
Furhat      I cannot serve alcohol to minors     No
Furhat      I hope you enjoy it                  Yes/>18
Table 3. Greeting module.

Greeting
Character   Dialogue                             Trigger
Furhat      Hello
User        Hi/hello
Furhat      How was your day so far?
User        (You can say anything)
Furhat      Are you in the mood for a joke?
User        Yes/no                               Joke/interaction
Table 4. Positive social identity robot’s interaction module.

Interaction
Character   Dialogue                                     Module/State
User        No                                           Greeting/joking answer
Furhat      What can I do for you?
User        What do you have?
Furhat      (Lists some products)
User        Are these all the drinks?
Furhat      (Lists some products)
User        I want some wine (alcoholic product)         Age
Furhat      Anything else?
User        I want some water (non-alcoholic product)
Furhat      Anything else?
User        Can you tell me something funny?             Joking
User        No
User        (No answer)                                  Interaction/joking
Furhat      Goodbye                                      END
Table 5. Joking module.

Joking
Character   Dialogue                 Module/State
User        Yes                      Greeting/joking answer
Furhat      (Says some jokes)
Furhat      Another?
User        Yes/no                   Joker/interaction
Table 6. Mean, median, and standard deviation of the GQS for each subdimension of the questionnaire: ANTHRO—anthropomorphism, LIKABI—likability, INTELL—perceived intelligence, and SAFE—perceived safety.

                                                  Positive                Negative
                                            Mean  Median  Stdv      Mean  Median  Stdv
ANTHRO  Fake–natural                         3.1    3.0    1.0       2.6    2.0    1.1
        Machine-like–human-like              2.6    2.0    0.8       2.5    2.0    1.0
        Unconscious–conscious                3.1    3.0    1.0       3.3    3.0    1.0
        Artificial–lifelike                  2.5    2.0    1.0       2.5    2.0    1.0
        Moving rigidly–moving elegantly      2.4    2.0    1.0       2.3    2.0    0.9
LIKABI  Dislike–like                         3.9    4.0    1.0       2.9    2.5    1.3
        Unfriendly–friendly                  4.2    4.0    0.6       2.1    2.0    1.4
        Unkind–kind                          4.2    4.0    0.8       2.3    2.0    1.4
        Unpleasant–pleasant                  4.1    4.0    0.8       2.6    3.0    1.3
        Awful–nice                           4.1    4.0    0.9       2.8    3.0    1.3
INTELL  Incompetent–competent                3.7    4.0    1.0       2.8    3.0    1.2
        Ignorant–knowledgeable               3.6    3.5    0.9       3.3    3.0    0.9
        Irresponsible–responsible            4.1    4.0    0.9       3.5    3.5    1.3
        Unintelligent–intelligent            3.5    3.0    0.9       3.4    3.0    1.1
        Foolish–sensible                     3.8    4.0    0.8       2.8    3.0    1.1
SAFE    Anxious–relaxed                      4.0    4.0    0.9       3.2    3.5    1.3
        Calm–agitated                        2.0    2.0    1.1       2.3    2.0    1.1
        Still–surprised                      3.4    3.5    1.2       3.8    4.0    1.2
Table 7. p-values of the paired t-test comparing negative and positive scores. Significant values are marked with an asterisk. Based on the Bonferroni correction, the adjusted alpha value is 0.002.

                                                                         p-Value
Godspeed I: anthropomorphism       Fake–natural                          0.1193
                                   Machine-like–human-like               0.6823
                                   Unconscious–conscious                 0.2436
                                   Artificial–lifelike                   0.8820
                                   Moving rigidly–moving elegantly       0.7306
Godspeed II: likability            Dislike–like                          0.0017 *
                                   Unfriendly–friendly                   <0.00001 *
                                   Unkind–kind                           <0.00001 *
                                   Unpleasant–pleasant                   0.0001 *
                                   Awful–nice                            0.00002 *
Godspeed III: perceived            Incompetent–competent                 0.0044
intelligence                       Ignorant–knowledgeable                0.0591
                                   Irresponsible–responsible             0.0115
                                   Unintelligent–intelligent             0.4428
                                   Foolish–sensible                      0.0001 *
Table 8. Pearson’s correlation values: positive identity robot. Significant R values, with a threshold of 0.35, are marked with an asterisk. Values are reported as R (p-value).

                                Extraversion        Agreeableness       Conscientiousness    Neuroticism         Openness
Godspeed I: anthropomorphism
  Unconscious–conscious         −0.0198 (0.9597)     0.1254 (0.5249)     0.3949 (0.0376) *    0.0222 (0.9107)     0.1031 (0.6016)
Godspeed II: likability
  Dislike–like                  −0.1462 (0.4774)     0.1073 (0.5868)     0.1674 (0.3945)      0.1445 (0.4632)     0.1971 (0.3148)
  Unfriendly–friendly           −0.1642 (0.4043)     0.1241 (0.5296)     0.0560 (0.7772)      0.0896 (0.6503)     0.1383 (0.4828)
  Unkind–kind                    0.1533 (0.4361)     0.2767 (0.1540)     0.1101 (0.5770)      0.1535 (0.4355)     0.4734 (0.0110) *
  Unpleasant–pleasant            0.2570 (0.1868)     0.1068 (0.5914)    −0.0434 (0.8280)      0.1805 (0.3580)     0.3761 (0.0486) *
  Awful–nice                     0.1258 (0.5262)     0.0062 (0.9750)     0.0785 (0.6913)      0.1922 (0.3272)     0.2357 (0.2273)
Godspeed III: perceived intelligence
  Incompetent–competent          0.4963 (0.0073) *   0.3341 (0.0823)     0.2855 (0.1408)     −0.0172 (0.9316)     0.5302 (0.0037) *
  Ignorant–knowledgeable         0.1907 (0.3310)     0.0053 (0.9787)     0.3626 (0.0579) *    0.1487 (0.4501)     0.3224 (0.0943)
  Irresponsible–responsible      0.3212 (0.0956)    −0.2023 (0.3075)     0.2182 (0.2647)      0.2516 (0.1965)     0.3574 (0.0619) *
  Unintelligent–intelligent      0.0773 (0.6958)     0.1171 (0.5529)     0.2260 (0.2475)      0.1713 (0.3834)     0.4499 (0.0163) *
  Foolish–sensible               0.1985 (0.3125)     0.2174 (0.2665)     0.0999 (0.6162)      0.1586 (0.4202)     0.2032 (0.2997)
Table 9. Pearson’s correlation values: negative identity robot. Significant R values with a threshold of 0.35 are marked with an asterisk. Values are reported as R (p-value).

                                Extraversion        Agreeableness       Conscientiousness    Neuroticism         Openness
Godspeed I: anthropomorphism
  Unconscious–conscious          0.4411 (0.0188) *   0.2142 (0.2737)     0.0267 (0.8927)      0.0553 (0.7810)     0.2394 (0.2198)
Godspeed II: likability
  Dislike–like                  −0.1514 (0.4461)     0.0011 (0.9956)    −0.2856 (0.1416)      0.4373 (0.0200) *  −0.1218 (0.5397)
  Unfriendly–friendly            0.1181 (0.5495)     0.0867 (0.6635)    −0.0419 (0.8359)      0.1350 (0.4934)    −0.229 (0.2411)
  Unkind–kind                    0.1852 (0.3460)     0.1074 (0.5879)     0.1229 (0.5333)      0.1121 (0.5701)    −0.2090 (0.2859)
  Unpleasant–pleasant           −0.1979 (0.3150)     0.0844 (0.6694)    −0.0609 (0.7617)      0.4793 (0.0099) *  −0.2551 (0.1903)
  Awful–nice                    −0.1535 (0.4461)     0.0220 (0.9115)     0.0757 (0.7018)      0.5033 (0.0064) *  −0.1670 (0.4160)
Godspeed III: perceived intelligence
  Incompetent–competent          0.0852 (0.6672)    −0.0251 (0.9195)    −0.0116 (0.9557)      0.3933 (0.0386) *  −0.1317 (0.5064)
  Ignorant–knowledgeable        −0.0796 (0.6895)    −0.1694 (0.3899)     0.2027 (0.3009)      0.2813 (0.1470)     0.0424 (0.8304)
  Irresponsible–responsible     −0.1481 (0.4523)    −0.1686 (0.4160)     0.1277 (0.5173)      0.3425 (0.0744)    −0.2266 (0.2475)
  Unintelligent–intelligent     −0.066 (0.7386)      0.0591 (0.7651)     0.1300 (0.5097)      0.2978 (0.1238)     0.1022 (0.6048)
  Foolish–sensible               0.0064 (0.9742)     0.2128 (0.2770)     0.2575 (0.1859)      0.3210 (0.0958)    −0.0438 (0.8280)
Table 10. Most significant factors in difference factor loadings (* significant for values >= 0.60).

                                          Likability   Credibility   Competence   Discomfort
Godspeed I: anthropomorphism
  Fake–natural                              0.53         0.31         −0.97         0.06
  Machine-like–human-like                   0.82 *       0.17         −1.09         0.20
  Unconscious–conscious                     0.46        −0.03         −0.35        −0.64
  Artificial–lifelike                       0.65 *      −0.08         −0.39         0.15
  Moving rigidly–moving elegantly           0.94 *      −0.83         −0.32         0.25
Godspeed II: likability
  Dislike–like                             −0.41         0.62 *        0.23        −0.22
  Unfriendly–friendly                      −0.38         0.07          0.28        −0.14
  Unkind–kind                              −0.39         0.14          0.22        −0.15
  Unpleasant–pleasant                      −0.96         0.61 *        0.16         0.44
  Awful–nice                               −0.90         0.69 *        0.12         0.48
Godspeed III: perceived intelligence
  Incompetent–competent                    −0.65         0.51          0.16 *      −0.35
  Ignorant–knowledgeable                   −0.24         0.07          0.27        −0.09
  Irresponsible–responsible                −0.74         0.66 *        0.12         0.31
  Unintelligent–intelligent                −0.42         0.18          0.31        −0.19
  Foolish–sensible                         −0.18        −0.01          0.29        −0.26
Godspeed IV: perceived security
  Anxious–relaxed                          −0.30         0.78          0.18         0.92 *
  Calm–agitated                             0.08         0.35          0.07        −0.26
  Still–surprised                           0.18         0.81 *       −0.10         0.15
Big Five personality traits
  Extraversion                             −0.13        −0.35          0.37        −0.45
  Agreeableness                             0.00        −0.21          0.32        −0.04
  Conscientiousness                         0.25         0.07          0.27         0.30
  Neuroticism                              −0.56         0.37         −0.03         0.54
  Openness                                 −0.06         0.53          0.39        −0.68
Staffa, M.; D’Errico, L.; Maratea, A. Influence of Social Identity and Personality Traits in Human–Robot Interactions. Robotics 2024, 13, 144. https://doi.org/10.3390/robotics13100144