Published by De Gruyter Oldenbourg November 17, 2021

Towards Human-Centered AI: Psychological concepts as foundation for empirical XAI research

  • Katharina Weitz

Abstract

Human-Centered AI is a widely requested goal for AI applications. To reach it, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.
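
To make concrete what participants in such studies are typically shown, the sketch below produces a local, feature-weight explanation for a single classifier prediction. It is a minimal illustration assuming the open-source Python packages scikit-learn and lime; the dataset, model, and explanation settings are arbitrary demonstration choices, not the tooling or study setup of the work summarized above.

    # Minimal sketch: a local feature-weight explanation of one prediction,
    # of the kind end-users might be shown in an empirical XAI study.
    # Assumes the open-source scikit-learn and lime packages (illustrative only).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
    )

    # Explain one instance; the resulting (feature, weight) pairs could be
    # rendered as a bar chart or verbalized for participants without a
    # machine learning background.
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())

How participants interpret such an output, and how it shifts their trust, mental models, and self-efficacy, is precisely what the psychological instruments discussed here are meant to measure.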

About the author

MSc Katharina Weitz

Katharina Weitz received an MSc in Psychology and an MSc in Computing in the Humanities (Applied Computer Science) at the University of Bamberg, Germany. At the Lab for Human-Centered AI at the University of Augsburg, she investigates the influence of explainability of AI systems on people’s trust and mental models. In addition to her research activities, she communicates scientific findings to the general public in books, lectures, workshops, and exhibitions. For her work, she was honored with a junior fellowship of the Gesellschaft für Informatik.

Received: 2021-10-08
Accepted: 2021-10-12
Published Online: 2021-11-17
Published in Print: 2022-04-26

© 2022 Walter de Gruyter GmbH, Berlin/Boston
