
Research Article | Open Access

"You Can either Blame Technology or Blame a Person..." --- A Conceptual Model of Users' AI-Risk Perception as a Tool for HCI

Published: 08 November 2024

Abstract

AI-powered systems pose unknown challenges for designers, policymakers, and users, making it more difficult to assess potential harms and outcomes. Although understanding risks is a prerequisite for building trust in technology, users are often excluded from risk assessments and explanations in policy and design. To address this issue, we conducted three workshops with 18 participants discussing the EU AI Act, the European proposal for a legal framework regulating AI. Based on the results of these workshops, we propose a user-centered conceptual model with five risk dimensions (Design and Development, Operational, Distributive, Individual, and Societal) comprising 17 key risks. We further identify six criteria for categorizing use cases. Our conceptual model (1) contributes to responsible-design discourses by connecting risk assessment theories with user-centered approaches, and (2) supports designers and policymakers in more strongly considering a user perspective that complements their own expert views.
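As a purely hypothetical illustration (not part of the paper), the five risk dimensions named above could be encoded as an enumeration that a design team uses to tag AI use cases during a risk assessment. Only the five dimension names come from the abstract; the `tag_use_case` helper and the credit-scoring example are illustrative assumptions:

```python
from enum import Enum

class RiskDimension(Enum):
    """The five risk dimensions named in the paper's conceptual model."""
    DESIGN_AND_DEVELOPMENT = "Design and Development"
    OPERATIONAL = "Operational"
    DISTRIBUTIVE = "Distributive"
    INDIVIDUAL = "Individual"
    SOCIETAL = "Societal"

def tag_use_case(description: str, dimensions: set[RiskDimension]) -> dict:
    """Attach a set of perceived risk dimensions to a use-case description.

    Hypothetical helper: sketches how a team might record which of the
    model's dimensions apply to a given AI use case.
    """
    return {
        "use_case": description,
        "risk_dimensions": sorted(d.value for d in dimensions),
    }

# Illustrative example only: tagging a hypothetical use case.
example = tag_use_case(
    "AI-based credit scoring",
    {RiskDimension.INDIVIDUAL, RiskDimension.DISTRIBUTIVE},
)
# example["risk_dimensions"] → ["Distributive", "Individual"]
```

The 17 key risks and six use-case criteria are not enumerated in the abstract, so they are deliberately left out of this sketch.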



    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW2 (CSCW)
    November 2024, 5177 pages
    EISSN: 2573-0142
    DOI: 10.1145/3703902

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. AI systems
    2. EU AI Act
    3. qualitative research
    4. responsible AI
    5. risk perception


