DOI: 10.1145/3630106.3658910
research-article
Open access

The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment

Published: 05 June 2024

Abstract

As more algorithmic systems have come under scrutiny for their potential to inflict societal harms, an increasing number of organizations that hold power over harmful algorithms have chosen, or have been required by law, to abandon them. While social movements and calls to abandon harmful algorithms have emerged across application domains, little academic attention has been paid to abandonment as a means of mitigating algorithmic harms. In this paper, we take a first step toward conceptualizing “algorithm abandonment” as an organization’s decision to stop designing, developing, or using an algorithmic system due to its (potential) harms. We conduct a thematic analysis of real-world cases of algorithm abandonment to characterize the dynamics leading to this outcome. Our analysis of 40 cases reveals that campaigns to abandon an algorithm follow a common process of six iterative phases, which we term the 6 D’s of abandonment: discovery, diagnosis, dissemination, dialogue, decision, and death. In addition, we highlight key factors that facilitate (or prohibit) abandonment, including characteristics of both the technical and social systems in which the algorithm is embedded. We discuss implications for several stakeholders, including proprietors and technologists who have the power to influence an algorithm’s (dis)continued use, FAccT researchers, and policymakers.

Cited By

• (2024) AI in the Nonprofit Human Services: Distinguishing Between Hype, Harm, and Hope. Human Service Organizations: Management, Leadership & Governance, 1–12. https://doi.org/10.1080/23303131.2024.2427459. Online publication date: 3 Dec 2024.

Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. abandonment
    2. accountability
    3. contestation
    4. refusal

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    FAccT '24

Article Metrics

• Downloads (Last 12 months): 490
• Downloads (Last 6 weeks): 85
    Reflects downloads up to 20 Dec 2024
