FAccT '22 research article
DOI: 10.1145/3531146.3534628

The Conflict Between Explainable and Accountable Decision-Making Algorithms

Published: 20 June 2022

Abstract

Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and be hired. Even though these systems are currently deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems that provide post-hoc explanations could be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.
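
To make the abstract's notion of a post-hoc explanation concrete, consider counterfactual explanations [104]: the smallest change to a decision subject's features that would flip the algorithm's output. The sketch below illustrates the idea for a simple linear scoring model; the feature names, weights, and helper functions are illustrative assumptions, not material from the paper.

```python
# Minimal sketch of a post-hoc counterfactual explanation for a linear
# decision model, in the spirit of Wachter et al. [104] and Ustun et al. [96].
# All feature names, weights, and thresholds are illustrative assumptions.
import numpy as np

def decide(x, w, b):
    """The 'black box': approve the applicant iff the linear score is positive."""
    return float(np.dot(w, x) + b) > 0.0

def counterfactual(x, w, b, step=0.05, max_steps=1000):
    """Find a small perturbation of x that flips the decision.

    For a linear model, moving along the weight vector changes the score
    fastest; real counterfactual methods add sparsity and plausibility
    constraints that this sketch omits.
    """
    original = decide(x, w, b)
    direction = w / np.linalg.norm(w)
    if original:                       # approved: walk toward rejection instead
        direction = -direction
    x_cf = np.array(x, dtype=float)    # work on a copy of the input
    for _ in range(max_steps):
        x_cf += step * direction
        if decide(x_cf, w, b) != original:
            return x_cf
    return None                        # no flip found within the step budget

# Hypothetical decision subject: [income (10k USD), years employed, debt ratio]
w = np.array([0.8, 0.5, -1.2])
b = -2.0
x = np.array([2.0, 1.0, 0.6])

print("decision:", decide(x, w, b))                 # False -> rejected
x_cf = counterfactual(x, w, b)
print("counterfactual:", np.round(x_cf, 2))         # nearest approved profile
print("required change:", np.round(x_cf - x, 2))    # what the subject must alter
```

The explanation is generated after the fact, without exposing the model's internals. The paper's concern is precisely that such explanations can shift perceived control, and hence blame, onto the decision subject who is told what to change.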

References

[1]
Heike Felzmann, Eduard Fosch Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux. 2019. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society 6, 1 (2019), 2053951719860542.
[2]
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[3]
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (2020), 82–115.
[4]
Peter M Asaro. 2016. The Liability Problem for Autonomous Artificial Agents. In AAAI Spring Symposia. 190–194.
[5]
Edmond Awad, Sohan Dsouza, Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. 2020. Crowdsourcing moral machines. Commun. ACM 63, 3 (2020), 48–55.
[6]
Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S Lasecki, Daniel S Weld, and Eric Horvitz. 2019. Beyond accuracy: The role of mental models in human-AI team performance. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7. 2–11.
[7]
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–16.
[8]
Solon Barocas and Andrew D Selbst. 2016. Big data’s disparate impact. Calif. L. Rev. 104 (2016), 671.
[9]
Solon Barocas, Andrew D Selbst, and Manish Raghavan. 2020. The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 80–89.
[10]
Adrien Bibal, Michael Lognoul, Alexandre de Streel, and Benoît Frénay. 2020. Legal requirements on explainability in machine learning. Artificial Intelligence and Law (2020), 1–21.
[11]
Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. 2020. The moral psychology of AI and the ethical opt-out problem. Oxford University Press, Oxford, UK.
[12]
Mark Bovens. 2007. Analysing and assessing accountability: A conceptual framework. European law journal 13, 4 (2007), 447–468.
[13]
Harry Brignull, Marc Miquel, Jeremy Rosenberg, and James Offer. 2015. Dark Patterns: User Interfaces Designed to Trick People.
[14]
Bartosz Brożek and Bartosz Janik. 2019. Can artificial intelligences be moral agents? New Ideas in Psychology 54 (2019), 101–106.
[15]
Joanna J Bryson. 2010. Robots should be slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues 8 (2010), 63–74.
[16]
Joanna J Bryson, Mihailis E Diamantis, and Thomas D Grant. 2017. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25, 3 (2017), 273–291.
[17]
Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z Gajos. 2021. To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (2021), 1–21.
[18]
Jenna Burrell. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016), 2053951715622512.
[19]
Stephen Cave, Claire Craig, Kanta Dihal, Sarah Dillon, Jessica Montgomery, Beth Singler, and Lindsay Taylor. 2018. Portrayals and perceptions of AI and why they matter. (2018).
[20]
Paulius Čerka, Jurgita Grigienė, and Gintarė Sirbikytė. 2015. Liability for damages caused by artificial intelligence. Computer Law & Security Review 31, 3 (2015), 376–389.
[21]
Marc Champagne and Ryan Tonkens. 2015. Bridging the responsibility gap in automated warfare. Philosophy & Technology 28, 1 (2015), 125–137.
[22]
Jennifer Cobbe, Michelle Seng Ah Lee, and Jatinder Singh. 2021. Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
[23]
Mark Coeckelbergh. 2009. Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. AI & Society 24, 2 (2009), 181–189.
[24]
Mark Coeckelbergh. 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics 26, 4 (2020), 2051–2068.
[25]
ACM US Public Policy Council. 2017. Statement on algorithmic transparency and accountability. Commun. ACM (2017).
[26]
John Danaher. 2016. Robots, law and the retribution gap. Ethics and Information Technology 18, 4 (2016), 299–309.
[27]
Amit Datta, Michael Carl Tschantz, and Anupam Datta. 2015. Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. Proceedings on Privacy Enhancing Technologies 2015, 1 (2015), 92–112.
[28]
Paul B De Laat. 2018. Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & Technology 31, 4 (2018), 525–541.
[29]
Celso de Melo, Jonathan Gratch, and Peter Carnevale. 2014. The importance of cognition and affect for artificially intelligent decision makers. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 28.
[30]
Filippo Santoni de Sio and Giulio Mecacci. 2021. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philosophy & Technology (2021), 1–28.
[31]
Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O’Brien, Kate Scott, Stuart Schieber, James Waldo, David Weinberger, et al. 2017. Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134 (2017).
[32]
Upol Ehsan, Samir Passi, Q Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, and Mark O Riedl. 2021. The who in explainable AI: How AI background shapes perceptions of AI explanations. arXiv preprint arXiv:2107.13509 (2021).
[33]
Upol Ehsan and Mark O Riedl. 2021. Explainability Pitfalls: Beyond Dark Patterns in Explainable AI. arXiv preprint arXiv:2109.12480 (2021).
[34]
Madeleine Clare Elish. 2019. Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society (2019).
[35]
European Commission. 2019. Liability for artificial intelligence and other emerging digital technologies. https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en/format-PDF
[36]
European Commission. 2021. Communication From the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions Empty: Fostering a European approach to Artificial Intelligence. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2021:205:FIN
[37]
Ernst Fehr and Simon Gächter. 2002. Altruistic punishment in humans. Nature 415, 6868 (2002), 137–140.
[38]
Luciano Floridi. 2019. Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32, 2 (2019), 185–193.
[39]
Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, et al. 2018. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and Machines 28, 4 (2018), 689–707.
[40]
Matija Franklin, Edmond Awad, and David Lagnado. 2021. Blaming automated vehicles in difficult situations. Iscience 24, 4 (2021), 102252.
[41]
Caleb Furlough, Thomas Stokes, and Douglas J Gillan. 2019. Attributing blame to robots: I. The influence of robot autonomy. Human factors (2019), 0018720819880641.
[42]
Bryce Goodman and Seth Flaxman. 2017. European Union regulations on algorithmic decision-making and a “right to explanation”. AI magazine 38, 3 (2017), 50–57.
[43]
John-Stewart Gordon. 2020. Artificial moral and legal personhood. AI & Society (2020), 1–15.
[44]
David J Gunkel. 2017. Mind the gap: responsible robotics and the problem of responsibility. Ethics and Information Technology(2017), 1–14.
[45]
F Allan Hanson. 2009. Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and information technology 11, 1 (2009), 91–99.
[46]
Kenneth Einar Himma. 2009. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 11, 1 (2009), 19–29.
[47]
Maia Jacobs, Melanie F Pradier, Thomas H McCoy, Roy H Perlis, Finale Doshi-Velez, and Krzysztof Z Gajos. 2021. How machine-learning recommendations influence clinician treatment selections: the example of the antidepressant selection. Translational psychiatry 11, 1 (2021), 1–9.
[48]
Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 9 (2019), 389–399.
[49]
Deborah G Johnson. 2006. Computer systems: Moral entities but not moral agents. Ethics and information technology 8, 4 (2006), 195–204.
[50]
Deborah G Johnson. 2015. Technology with no human responsibility? Journal of Business Ethics 127, 4 (2015), 707–715.
[51]
Shalmali Joshi, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. Towards realistic individual recourse and actionable explanations in black-box decision making systems. arXiv preprint arXiv:1907.09615 (2019).
[52]
Atoosa Kasirzadeh and Andrew Smart. 2021. The Use and Misuse of Counterfactuals in Ethical Machine Learning. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 228–236.
[53]
Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[54]
Lauren Kirchner. 2020. Can Algorithms Violate Fair Housing Laws? The Markup. https://themarkup.org/locked-out/2020/09/24/fair-housing-laws-algorithms-tenant-screenings.
[55]
Kirsten Korosec. 2015. Volvo CEO: We Will Accept All Liability When Our Cars Are in Autonomous Mode. http://fortune.com/2015/10/07/volvo-liability-self-driving-cars/.
[56]
Joshua A Kroll. 2021. Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 758–771.
[57]
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, and Kevin Baum. 2021. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence 296 (2021), 103473.
[58]
Minha Lee, Gale Lucas, Johnathan Mell, Emmanuel Johnson, and Jonathan Gratch. 2019. What’s on Your Virtual Mind? Mind Perception in Human-Agent Negotiations. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. 38–45.
[59]
Gabriel Lima, Meeyoung Cha, Chihyung Jeon, and Kyung Sin Park. 2021. The Conflict Between People’s Urge to Punish AI and Legal Systems. Frontiers in Robotics and AI 8 (2021), 339. https://doi.org/10.3389/frobt.2021.756242
[60]
Gabriel Lima, Nina Grgić-Hlača, and Meeyoung Cha. 2021. Human perceptions on moral responsibility of AI: A case study in AI-assisted bail decision-making. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17.
[61]
Zachary C Lipton. 2018. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3 (2018), 31–57.
[62]
Peng Liu, Manqing Du, and Tingting Li. 2021. Psychological consequences of legal responsibility misattribution associated with automated vehicles. Ethics and Information Technology (2021), 1–14.
[63]
Bertram Malle. 2006. Intentionality, morality, and their relationship in human judgment. Journal of cognition and culture 6, 1-2 (2006), 87–112.
[64]
Bertram F Malle, Steve Guglielmo, and Andrew E Monroe. 2014. A theory of blame. Psychological Inquiry 25, 2 (2014), 147–186.
[65]
Bertram F Malle and Joshua Knobe. 1997. The folk concept of intentionality. Journal of experimental social psychology 33, 2 (1997), 101–121.
[66]
Bertram F Malle, Stuti Thapa Magar, and Matthias Scheutz. 2019. AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. In Robotics and well-being. Springer, 111–133.
[67]
Bertram F Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. 2015. Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 117–124.
[68]
Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Jairo Perez-Osorio, Ebru Baykara, and Agnieszka Wykowska. 2019. Do we adopt the intentional stance toward humanoid robots? Frontiers in Psychology 10 (2019), 450.
[69]
Andreas Matthias. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and information technology 6, 3 (2004), 175–183.
[70]
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267 (2019), 1–38.
[71]
Brent Mittelstadt. 2019. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1, 11 (2019), 501–507.
[72]
Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency. 279–288.
[73]
Satya Nadella. 2016. The Partnership of the Future. https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html.
[74]
Sven Nyholm. 2018. Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci. Science and engineering ethics 24, 4 (2018), 1201–1219.
[75]
Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447–453.
[76]
Frank Pasquale. 2015. The black box society. Harvard University Press.
[77]
Jairo Perez-Osorio and Agnieszka Wykowska. 2020. Adopting the intentional stance toward natural and artificial agents. Philosophical Psychology 33, 3 (2020), 369–395.
[78]
William Lloyd Prosser. 1941. Handbook of the Law of Torts. Vol. 4. West Publishing.
[79]
Anaïs Rességuier and Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society 7, 2 (2020), 2053951720942541.
[80]
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 1135–1144.
[81]
Scott Robbins. 2019. A misdirected principle with a catch: explicability for AI. Minds and Machines 29, 4 (2019), 495–514.
[82]
Alan Rubel, Adam Pham, and Clinton Castro. 2019. Agency Laundering and Algorithmic Decision Systems. In International Conference on Information. Springer, 590–598.
[83]
Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1, 5 (2019), 206–215.
[84]
Henrik Skaug Sætra. 2021. Confounding complexity of machine action: a Hobbesian account of machine responsibility. International Journal of Technoethics (IJT) 12, 1 (2021), 87–100.
[85]
Filippo Santoni de Sio and Jeroen Van den Hoven. 2018. Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI 5 (2018), 15.
[86]
Thomas M Scanlon. 2009. Moral dimensions. Harvard University Press.
[87]
Andrew D Selbst and Solon Barocas. 2018. The intuitive appeal of explainable machines. Fordham L. Rev. 87 (2018), 1085.
[88]
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision. 618–626.
[89]
David Shoemaker. 2011. Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. Ethics 121, 3 (2011), 602–632.
[90]
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of Go without human knowledge. Nature 550, 7676 (2017), 354–359.
[91]
Sheikh M Solaiman. 2017. Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artificial Intelligence and Law 25, 2 (2017), 155–179.
[92]
Robert Sparrow. 2007. Killer robots. Journal of applied philosophy 24, 1 (2007), 62–77.
[93]
Bernd Carsten Stahl. 2006. Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology 8, 4 (2006), 205–213.
[94]
Steve Torrance. 2008. Ethics and consciousness in artificial agents. AI & Society 22, 4 (2008), 495–521.
[95]
Jacob Turner. 2018. Robot rules: Regulating artificial intelligence. Springer.
[96]
Berk Ustun, Alexander Spangher, and Yang Liu. 2019. Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 10–19.
[97]
Ibo Van de Poel. 2011. The relation between forward-looking and backward-looking responsibility. In Moral Responsibility. Springer, 37–52.
[98]
Ibo Van de Poel. 2015. Moral responsibility. In Moral responsibility and the problem of many hands. Routledge, 24–61.
[99]
Aimee van Wynsberghe. 2021. Responsible Robotics and Responsibility Attribution. Robotics, AI, and Humanity: Science, Ethics, and Policy (2021), 239.
[100]
Michael Veale and Frederik Zuiderveen Borgesius. 2021. Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International 22, 4 (2021), 97–112.
[101]
Suresh Venkatasubramanian and Mark Alfano. 2020. The philosophical basis of algorithmic recourse. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 284–293.
[102]
Sahil Verma, John Dickerson, and Keegan Hines. 2020. Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596 (2020).
[103]
David C Vladeck. 2014. Machines without principals: liability rules and artificial intelligence. Wash. L. Rev. 89(2014), 117.
[104]
Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.
[105]
Julie Weed. 2021. Résumé-Writing Tips to Help You Get Past the A.I. Gatekeepers. New York Times. https://www.nytimes.com/2021/03/19/business/resume-filter-articial-intelligence.html.


Published In

FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022, 2351 pages
ISBN: 9781450393522
DOI: 10.1145/3531146

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. AI
2. Accountability
3. Algorithms
4. Artificial Intelligence
5. Blame
6. Decision-Making
7. Designers
8. Explainability
9. Patients
10. Responsibility
11. Users

Qualifiers

• Research-article
• Research
• Refereed limited



Cited By

• (2025) Explainable deep learning for diabetes diagnosis with DeepNetX2. Biomedical Signal Processing and Control 99, 106902. https://doi.org/10.1016/j.bspc.2024.106902
• (2024) Meaningful human control and variable autonomy in human-robot teams for firefighting. Frontiers in Robotics and AI 11. https://doi.org/10.3389/frobt.2024.1323980
• (2024) From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4806609
• (2024) Being Accountable is Smart: Navigating the Technical and Regulatory Landscape of AI-based Services for Power Grid. In Proceedings of the 2024 International Conference on Information Technology for Social Good, 118–126. https://doi.org/10.1145/3677525.3678651
• (2024) Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, 1–43. https://doi.org/10.1145/3641009
• (2024) Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau’s Adoption of Differential Privacy. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1150–1162. https://doi.org/10.1145/3630106.3658962
• (2024) From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1002–1013. https://doi.org/10.1145/3630106.3658951
• (2024) Exploring the Association between Moral Foundations and Judgements of AI Behaviour. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3613904.3642712
• (2024) Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3613904.3642018
• (2024) Impact of generative artificial intelligence models on the performance of citizen data scientists in retail firms. Computers in Industry 161, 104128. https://doi.org/10.1016/j.compind.2024.104128
