
DOI: 10.1145/3514094.3534181
research-article
Open access

Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance

Published: 27 July 2022

Abstract

Much attention has focused on algorithmic audits and impact assessments to hold developers and users of algorithmic systems accountable. But existing algorithmic accountability policy approaches have neglected the lessons from non-algorithmic domains, notably the importance of third parties. Our paper synthesizes lessons from other fields on how to craft effective systems of external oversight for algorithmic deployments. First, we discuss the challenges of third party oversight in the current AI landscape. Second, we survey audit systems across domains (e.g., financial, environmental, and health regulation) and show that the institutional design of such audits is far from monolithic. Finally, we survey the evidence base around these design components and spell out the implications for algorithmic auditing. We conclude that the turn toward audits alone is unlikely to achieve actual algorithmic accountability, and sustained focus on institutional design will be required for meaningful third party involvement.





    Published In

    AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
    July 2022
    939 pages
    ISBN:9781450392471
    DOI:10.1145/3514094
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. accountability
    2. algorithms
    3. auditing
    4. policy
    5. society


    Funding Sources

    • Mozilla Foundation

    Conference

AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
Oxford, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 61 of 162 submissions, 38%

Article Metrics

    • Downloads (Last 12 months): 2,140
    • Downloads (Last 6 weeks): 222

    Reflects downloads up to 23 Feb 2025

    Cited By

    • (2025) "Who Wants to Be Hired by AI? How Message Frames and AI Transparency Impact Individuals' Attitudes and Behaviors toward Companies Using AI in Hiring." Computers in Human Behavior: Artificial Humans. DOI: 10.1016/j.chbah.2025.100120. Online publication date: Jan 2025.
    • (2024) "Research on the Application of AI Technology in Auditing." Economic Management & Global Business Studies 3(1), 1-19. DOI: 10.69610/j.emgbs.20240831. Online publication date: 31 Aug 2024.
    • (2024) "Revising Tax Auditing: Examining the Human Factor as a Third-Party Controller in Algorithmic Decision-Making Processes." Denetişim, 47-58. DOI: 10.58348/denetisim.1540801. Online publication date: 1 Dec 2024.
    • (2024) "Position." Proceedings of the 41st International Conference on Machine Learning, 42543-42557. DOI: 10.5555/3692070.3693800. Online publication date: 21 Jul 2024.
    • (2024) "Position." Proceedings of the 41st International Conference on Machine Learning, 32691-32710. DOI: 10.5555/3692070.3693397. Online publication date: 21 Jul 2024.
    • (2024) "Assessing dual use risks in AI research: necessity, challenges and mitigation strategies." Research Ethics. DOI: 10.1177/17470161241267782. Online publication date: 30 Jul 2024.
    • (2024) "'Something Fast and Cheap' or 'A Core Element of Building Trust'? AI Auditing Professionals' Perspectives on Trust in AI." Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1-22. DOI: 10.1145/3686963. Online publication date: 8 Nov 2024.
    • (2024) "Integrating Equity in Public Sector Data-Driven Decision Making: Exploring the Desired Futures of Underserved Stakeholders." Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1-39. DOI: 10.1145/3686905. Online publication date: 8 Nov 2024.
    • (2024) "Collaboratively Designing and Evaluating Responsible AI Interventions." Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 658-662. DOI: 10.1145/3678884.3687136. Online publication date: 11 Nov 2024.
    • (2024) "Improving Group Fairness Assessments with Proxies." ACM Journal on Responsible Computing 1(4), 1-21. DOI: 10.1145/3677175. Online publication date: 24 Jul 2024.
