DOI: 10.1145/3391403.3399545

Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics

Published: 13 July 2020

Abstract

Why do biased algorithmic predictions arise, and what interventions can prevent them? We examine this topic with a field experiment on using machine learning to predict human capital. We randomly assign approximately 400 AI engineers to develop software under different experimental conditions to predict standardized test scores of OECD residents. We then assess the resulting predictive algorithms using the realized test performances, and through randomized audit-like manipulations of algorithmic inputs. We also use the diversity of our subject population to measure whether demographically non-traditional engineers were more likely to notice and reduce algorithmic bias, and whether algorithmic prediction errors are correlated within programmer demographic groups. This document describes our experimental design and motivation; the full results of our experiment are available at https://ssrn.com/abstract=3615404.
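The "randomized audit-like manipulations of algorithmic inputs" mentioned above can be thought of as a counterfactual test: feed a trained predictor two inputs that are identical except for one protected attribute, and measure the prediction gap. A minimal sketch of that idea follows; the predictor, feature names, and weights are hypothetical illustrations, not the study's actual models or data.

```python
# Hypothetical sketch of an audit-style input manipulation (not the authors' code).

def predict_score(features):
    """Stand-in predictor: a hand-set linear rule with a deliberate
    (undesirable) weight on the protected attribute, so the audit
    has something to detect."""
    return (
        50.0
        + 5.0 * features["years_education"]
        + 3.0 * features["group"]  # nonzero weight => biased predictor
    )

def audit_gap(predict, base_features, protected_key):
    """Counterfactual audit: compare predictions on two inputs that
    differ only in the protected attribute."""
    a = dict(base_features, **{protected_key: 0})
    b = dict(base_features, **{protected_key: 1})
    return predict(b) - predict(a)

gap = audit_gap(predict_score, {"years_education": 12, "group": 0}, "group")
print(gap)  # a nonzero gap flags dependence on the protected attribute
```

In practice an audit like this is run over many sampled inputs, so the reported quantity is a distribution of gaps rather than a single number.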





Published In

EC '20: Proceedings of the 21st ACM Conference on Economics and Computation
July 2020
937 pages
ISBN:9781450379755
DOI:10.1145/3391403
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. algorithmic fairness
  2. field experiment

Qualifiers

  • Abstract

Conference

EC '20: The 21st ACM Conference on Economics and Computation
July 13 - 17, 2020
Virtual Event, Hungary

Acceptance Rates

Overall acceptance rate: 664 of 2,389 submissions (28%)

Article Metrics

  • Downloads (last 12 months): 444
  • Downloads (last 6 weeks): 43

Reflects downloads up to 20 Nov 2024.

Cited By

  • (2024) Assessing and Mitigating Bias in Artificial Intelligence: A Review. Recent Advances in Computer Science and Communications 17(1). DOI: 10.2174/2666255816666230523114425. Jan 2024.
  • (2024) Algorithmic Bias and Historical Injustice: Race and Digital Profiling. SSRN Electronic Journal. DOI: 10.2139/ssrn.4812943.
  • (2024) Does Artificial Intelligence Help or Hurt Gender Diversity? Evidence from Two Field Experiments on Recruitment in Tech. SSRN Electronic Journal. DOI: 10.2139/ssrn.4764343.
  • (2024) A feeling for the algorithm: Diversity, expertise, and artificial intelligence. Big Data & Society 11(1). DOI: 10.1177/20539517231224247. 8 Jan 2024.
  • (2024) Participant Use of Artificial Intelligence in Online Focus Groups: An Experiential Account. International Journal of Qualitative Methods 23. DOI: 10.1177/16094069241286417. 15 Oct 2024.
  • (2024) Cognitively Biased Users Interacting with Algorithmically Biased Results in Whole-Session Search on Debated Topics. In Proceedings of the 2024 ACM SIGIR International Conference on Theory of Information Retrieval, 227-237. DOI: 10.1145/3664190.3672520. 2 Aug 2024.
  • (2024) Ethics in the Age of AI: An Analysis of AI Practitioners' Awareness and Challenges. ACM Transactions on Software Engineering and Methodology 33(3), 1-35. DOI: 10.1145/3635715. 15 Mar 2024.
  • (2024) Exploring Fairness-Accuracy Trade-Offs in Binary Classification: A Comparative Analysis Using Modified Loss Functions. In Proceedings of the 2024 ACM Southeast Conference, 148-156. DOI: 10.1145/3603287.3651192. 18 Apr 2024.
  • (2024) Delegation in Hiring: Evidence from a Two-Sided Audit. Journal of Political Economy Microeconomics 2(4), 852-882. DOI: 10.1086/732127. 1 Nov 2024.
  • (2024) Neuromarketing algorithms' consumer privacy and ethical considerations: challenges and opportunities. Cogent Business & Management 11(1). DOI: 10.1080/23311975.2024.2333063. 4 Apr 2024.
