Trust is fundamental to fostering effective collaboration between people and robots. It is imperative that people can trust robots not to create hazardous situations, such as starting a fire when trying to make a cup of tea or giving the wrong medicine to a vulnerable person. Likewise, people should be able to trust robots not to create an unsafe situation, such as leaving the door open unattended or providing personal information to strangers - and potentially to thieves. Trust, however, is a complex feeling, and it can be affected by several factors that depend on the human, the robot and the context of the interaction. A lack of trust might hinder a robot's assistance or lead to a loss of interest in robots after the novelty effect fades, while unreasonable over-trust in a robot's capabilities could even have fatal consequences. It is therefore important to design and develop mechanisms that can both increase and mitigate people's trust in service and assistive robots as appropriate. Indeed, a positive and balanced level of trust is fundamental for building a high-quality interaction.

Similarly, socially aware robots are perceived more positively by people in social contexts and situations. Social robotics systems should therefore integrate people's direct and indirect modes of communication. Moreover, robots should be capable of self-adapting to satisfy people's needs (e.g. personality, emotions, preferences, habits), and of incorporating reactive and predictive meta-cognition models to reason about the situational context (e.g. their own erroneous behaviours) and provide socially acceptable behaviours.

This special issue comprises 24 manuscripts. The collection covers a wide range of topics, identifying the principal issues around the role of trust in social robotics in order to effectively design and develop socially acceptable and trustworthy robots. The contributions address different aspects of people's acceptance and trust of robots in a variety of human-centred environments, such as educational, assistive and collaborative scenarios. Some works introduce new notions of acceptance and trust for autonomous artificial agents, such as tolerance and distrust, and draw on interdisciplinary perspectives from sociology, psychology and philosophy. Other papers focus on defining the factors affecting people's trust in robots, such as society's general attitudes, perceptions, prejudices and expectations, as well as cognitive and emotional effects. Further contributions investigate how to recover from a loss of trust, for example after different types of errors, or how to enhance trust in robots, for example by personalising gaze, navigation, workload and interaction based on individuals' characteristics (e.g., personality traits). Another group of works proposes models for measuring and evaluating trust during human-robot interaction. Finally, some authors provide a necessary focus on moral considerations, ethics, the definition of existing and new policies, and the integration of robotics and AI into the EU's policy plans.

1 Additional Information

This Special Issue builds on the conjunction of the SCRITA (Trust, Acceptance and Social Cues in Human-Robot Interaction) and TRAITS (The Road to a successful HRI: AI, Trust and ethicS) workshops, organised at the IEEE RO-MAN and ACM/IEEE HRI 2021 conferences, respectively. The SCRITA workshops focus on trust, acceptance and social cues in HRI, while TRAITS investigates the role and effects of trust, AI and ethics in HRI. Together, these fundamental topics span a range of research areas that guide researchers in building a successful human-robot interaction.