DOI: 10.1145/3630106.3659051
FAccT Conference Proceedings · Research Article · Open Access

On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives

Published: 05 June 2024

Abstract

Human oversight is currently discussed as a potential safeguard to counter some of the negative aspects of high-risk AI applications. This prompts a critical examination of the role and conditions necessary for what is prominently termed effective or meaningful human oversight of these systems. This paper investigates effective human oversight by synthesizing insights from psychological, legal, philosophical, and technical domains. Based on the claim that the main objective of human oversight is risk mitigation, we propose a viable understanding of effectiveness in human oversight: for human oversight to be effective, the oversight person has to have (a) sufficient causal power with regard to the system and its effects, (b) suitable epistemic access to relevant aspects of the situation, (c) self-control, and (d) fitting intentions for their role. Furthermore, we argue that this is equivalent to saying that an oversight person is effective if and only if they are morally responsible and have fitting intentions. Against this backdrop, we suggest facilitators and inhibitors of effectiveness in human oversight when striving for practical applicability. We discuss factors in three domains, namely, the technical design of the system, individual factors of oversight persons, and the environmental circumstances in which they operate. Finally, this paper scrutinizes the upcoming AI Act of the European Union – in particular Article 14 on Human Oversight – as an exemplary regulatory framework in which we study the practicality of our understanding of effective human oversight. By analyzing the provisions and implications of the European AI Act proposal, we pinpoint how far that proposal aligns with our analyses regarding effective human oversight, as well as how it might be enriched by our conceptual understanding of effectiveness in human oversight.


Cited By

  • Doing Responsibilities with Automated Grading Systems: An Empirical Multi-Stakeholder Exploration. In Proceedings of the 13th Nordic Conference on Human-Computer Interaction (2024), 1–13. https://doi.org/10.1145/3679318.3685334. Online publication date: 13 Oct 2024.
  • Effective Human Oversight of AI-Based Systems: A Signal Detection Perspective on the Detection of Inaccurate and Unfair Outputs. Minds and Machines 35, 1 (2024). https://doi.org/10.1007/s11023-024-09701-0. Online publication date: 5 Nov 2024.
  • Safety and Reliability of Artificial Intelligence Systems. In Artificial Intelligence for Safety and Reliability Engineering (2024), 185–199. https://doi.org/10.1007/978-3-031-71495-5_9. Online publication date: 29 Sep 2024.


Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024, 2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. AI Act
  2. High-Risk AI
  3. Human Oversight
  4. Law
  5. Psychology


Article Metrics

  • Downloads (last 12 months): 696
  • Downloads (last 6 weeks): 192

Reflects downloads up to 19 Nov 2024
