Abstract
Despite the proliferation of AI ethics frameworks (AIEFs) published over the last decade, it remains unclear which of them have actually been adopted in industry. Moreover, the sheer volume of AIEFs, without clear demonstrations of their effectiveness, makes it difficult for businesses to decide which framework to adopt. As a first step toward addressing this problem, we applied four existing frameworks to assess the AI ethics concerns of a real-world AI system. We compared the experience of applying the AIEFs from the perspective of (a) a third-party auditor conducting an AI ethics risk assessment for the company, and (b) the company receiving the audit outcomes. Our results suggest that the feel-good factor of having completed an assessment is common across the AIEFs, which took anywhere between 1.5 and 20 hours to complete. However, each framework provides different benefits (e.g., issue discovery vs. issue monitoring), and the frameworks are likely best used in conjunction with one another at different stages of the AI development process. We therefore call on the AI ethics community to better specify the suitability and expected benefits of existing frameworks to enable broader adoption of AI ethics practice in industry.
Data availability statement
The datasets generated and/or analysed during the current study are not publicly available due to the confidentiality conditions under which the human participant data (e.g., interviews) were collected. However, the reports resulting from the AI ethics assessments we conducted are available from the corresponding author on reasonable request.
Acknowledgements
We acknowledge the financial support of NSERC [Grant no. G13031], McGill University, and the Arts Research Internship Awards (ARIA) from the Arts Internship Office of McGill University in conducting this study. The authors are grateful to everyone who participated in the study.
Ethics declarations
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Qiang, V., Rhim, J. & Moon, A. No such thing as one-size-fits-all in AI ethics frameworks: a comparative case study. AI & Soc 39, 1975–1994 (2024). https://doi.org/10.1007/s00146-023-01653-w