Abstract
The rising interest in socially assistive robotics stems, at least in part, from the aging population around the world. Considerable research and attention has gone into ensuring the safety of these robots. However, little has been done to consider the necessary role of emotion in these robots and the potential ethical implications of affect-aware socially assistive robots. In this chapter we address some of the considerations that need to be taken into account in the research and development of robots assisting a vulnerable population. We use two fictional scenarios involving a robot assisting a person with Parkinson’s disease to discuss five ethical issues relevant to affect-aware socially assistive robots.
Notes
- 1.
This is not to say that there are not other critical issues pertaining to assistive robots, especially affect-aware ones. Designers need to consider the repercussions of a robot being able to capture and store the sort of data that is necessary for an affect-aware robot. This includes issues regarding invasiveness, privacy, and discomfort [17, 18]. Whether affect recognition technologies should be used to “fix” or augment human abilities is another concern [7]. By focusing on the social aspects of assistive robots we do not mean to ignore the challenges regarding the capture and storage of personal, affective data, but to focus on the underrepresented issues pertinent to social affect in the context of human-robot interaction.
- 2.
Whether or not these simulated emotions are “emotions” in the human sense is a discussion that is outside the scope of this paper.
References
Arkin, R.C., Ulam, P., Wagner, A.R.: Moral decision making in autonomous systems: enforcement, moral emotions, dignity, trust, and deception. Proc. IEEE 100(3), 571–589 (2012)
Bickmore, T.W., Picard, R.W.: Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput. Human Interact. 12(2), 293–327 (2005)
Briggs, G.: Blame, what is it good for? In: Proceedings of the Workshop on Philosophical Perspectives on HRI at Ro-Man 2014 (2014)
Briggs, P., Scheutz, M., Tickle-Degnen, L.: Are robots ready for administering health status surveys? First results from an HRI study with subjects with Parkinson’s disease. In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 327–334. ACM (2015)
Broekens, J., Heerink, M., Rosendal, H.: Assistive social robots in elderly care: a review. Gerontechnology 8(2), 94–103 (2009)
Corritore, C.L., Kracher, B., Wiedenbeck, S.: On-line trust: concepts, evolving themes, a model. Int. J. Human Comput. Stud. 58(6), 737–758 (2003)
el Kaliouby, R., Picard, R., Baron-Cohen, S.: Affective computing and autism. Ann. N. Y. Acad. Sci. 1093(1), 228–248 (2006)
Foot, P.: The problem of abortion and the doctrine of the double effect. Oxf. Rev. 5, 5–15 (1967)
Gratch, J., Marsella, S.: A domain-independent framework for modeling emotion. J. Cog. Syst. Res. 5(4), 269–306 (2004)
Haidt, J.: The moral emotions. Handb. Affect. Sci. 11, 852–870 (2003)
Heerink, M., Ben, K., Evers, V., Wielinga, B.: The influence of social presence on acceptance of a companion robot by older people. J. Phys. Agents 2(2), 33–40 (2008)
Malle, B.F., Guglielmo, S., Monroe, A.E.: Moral, cognitive, and social: the nature of blame. In: Forgas, J.P., Fiedler, K., Sedikides, C. (eds.) Social Thinking and Interpersonal Behavior, pp. 313–332. Psychology Press (2012)
Mao, W., Gratch, J.: Modeling social causality and responsibility judgment in multi-agent interactions. In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pp. 3166–3170. AAAI Press (2013)
Mikhail, J.: Universal moral grammar: theory, evidence and the future. Trends Cogn. Sci. 11(4), 143–152 (2007)
Muir, B.M., Moray, N.: Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 39(3), 429–460 (1996)
Pennebaker, J.W., Francis, M.E., Booth, R.J.: Linguistic Inquiry and Word Count: LIWC 2001. Lawrence Erlbaum Associates, Mahwah (2001)
Reynolds, C., Picard, R.: Ethical evaluation of displays that adapt to affect. Cyber Psychol. Behav. 7(6), 662–666 (2004)
Reynolds, C., Picard, R.W.: Evaluation of affective computing systems from a dimensional metaethical position. In: 1st Augmented Cognition Conference, in Conjunction with the 11th International Conference on Human-Computer Interaction, pp. 22–27 (2005)
Salem, M., Dautenhahn, K.: Evaluating trust and safety in HRI: practical issues and ethical challenges. In: Workshop on the Emerging Policy and Ethics of Human-Robot Interaction @ HRI 2015 (2015)
Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K.: Towards safe and trustworthy social robots: ethical challenges and practical issues. In: International Conference on Social Robotics (2015)
Scheutz, M.: The inherent dangers of unidirectional emotional bonds between humans and social robots. In: Lin, P., Bekey, G., Abney K. (eds.) Anthology on Robo-Ethics. MIT Press (2012)
Scheutz, M., Malle, B.F.: “Think and do the right thing”—a plea for morally competent autonomous robots. In: 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, pp. 1–4. IEEE (2014)
Smith, C.A., Ellsworth, P.C.: Patterns of cognitive appraisal in emotion. J. Pers. Soc. Psychol. 48(4), 813–838 (1985)
Tapus, A., Tapus, C., Matarić, M.J.: The use of socially assistive robots in the design of intelligent cognitive therapies for people with dementia. In: Proceedings of IEEE International Conference on Rehabilitation Robotics, pp. 924–929. IEEE (2009)
Tickle-Degnen, L., Lyons, K.D.: Practitioners’ impressions of patients with Parkinson’s disease: the social ecology of the expressive mask. Soc. Sci. Med. 58(3), 603–614 (2004)
Tickle-Degnen, L., Zebrowitz, L.A., Ma, H.I.: Culture, gender and health care stigma: practitioners’ response to facial masking experienced by people with Parkinson’s disease. Soc. Sci. Med. 73(1), 95–102 (2011)
Tomai, E., Forbus, K.: Plenty of blame to go around: a qualitative approach to attribution of moral responsibility. In: Proceedings of Qualitative Reasoning Workshop (2007). http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA470434
United Nations, Department of Economic and Social Affairs, Population Division: World Population Ageing 2013. ST/ESA/SER.A/348 (2013)
Wilson, J.R.: Towards an affective robot capable of being a long-term companion. In: Sixth International Conference on Affective Computing and Intelligent Interaction, IEEE (2015)
Wilson, J.R., Scheutz, M.: A model of empathy to shape trolley problem moral judgements. In: The Sixth International Conference on Affective Computing and Intelligent Interaction. IEEE (2015)
World Health Organization: Global health and ageing (2011)
Acknowledgments
This work was supported in part by NSF grant #IIS-1316809 and a grant from the Office of Naval Research, No. N00014-14-1-0144. The opinions expressed here are our own and do not necessarily reflect the views of NSF or ONR.
Copyright information
© 2016 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Wilson, J.R., Scheutz, M., Briggs, G. (2016). Reflections on the Design Challenges Prompted by Affect-Aware Socially Assistive Robots. In: Tkalčič, M., De Carolis, B., de Gemmis, M., Odić, A., Košir, A. (eds) Emotions and Personality in Personalized Services. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-31413-6_18
DOI: https://doi.org/10.1007/978-3-319-31413-6_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-31411-2
Online ISBN: 978-3-319-31413-6