DOI: 10.1145/3351095.3372878

Research article · Open access

Fairness is not static: deeper understanding of long term fairness via simulation studies

Published: 27 January 2020

Abstract

As machine learning becomes increasingly incorporated within high-impact decision ecosystems, there is a growing need to understand the long-term behaviors of deployed ML-based decision systems and their potential consequences. Most approaches to understanding or improving the fairness of these systems have focused on static settings without considering long-term dynamics. This is understandable; long-term dynamics are hard to assess, particularly because they do not align with the traditional supervised ML research framework that uses fixed data sets. To address this structural difficulty in the field, we advocate for the use of simulation as a key tool in studying the fairness of algorithms. We explore three toy examples of dynamical systems that have been previously studied in the context of fair decision making for bank loans, college admissions, and allocation of attention. By analyzing how learning agents interact with these systems in simulation, we are able to extend previous work, showing that static or single-step analyses do not give a complete picture of the long-term consequences of an ML-based decision system. We provide an extensible open-source software framework for implementing fairness-focused simulation studies and to further reproducible research, available at https://github.com/google/ml-fairness-gym.
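The kind of long-run dynamics the abstract describes can be made concrete with a small simulation. The sketch below is hypothetical and does not use the ml-fairness-gym API: a toy two-group lending environment in which each approved loan feeds back into that group's future score distribution, so a fixed decision threshold that looks acceptable in a static snapshot can have compounding effects over many steps.

```python
import random


class ToyLendingEnv:
    """Hypothetical two-group lending environment (not the ml-fairness-gym API).

    Each applicant has a repayment probability drawn around their group's
    mean. Approved loans feed back into the state: a repaid loan nudges the
    group's mean up, a default nudges it down more sharply.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.mean = {"A": 0.65, "B": 0.45}  # made-up initial per-group means

    def applicant(self):
        group = self.rng.choice(["A", "B"])
        p_repay = min(1.0, max(0.0, self.rng.gauss(self.mean[group], 0.1)))
        return group, p_repay

    def step(self, group, p_repay, approve):
        """Return the bank's reward and apply the feedback dynamics."""
        if not approve:
            return 0.0
        repaid = self.rng.random() < p_repay
        delta = 0.002 if repaid else -0.004
        self.mean[group] = min(1.0, max(0.0, self.mean[group] + delta))
        return 1.0 if repaid else -1.0


def run(threshold, steps=2000, seed=0):
    """Run a fixed-threshold agent; report total reward and final group means."""
    env = ToyLendingEnv(seed)
    total = 0.0
    for _ in range(steps):
        group, p_repay = env.applicant()
        total += env.step(group, p_repay, approve=p_repay >= threshold)
    return total, dict(env.mean)


if __name__ == "__main__":
    total, means = run(threshold=0.6)
    print(f"reward={total:.0f}  final group means={means}")
```

Under these made-up dynamics, group B is approved far less often at a threshold of 0.6 and so participates far less in the feedback loop, while group A's mean can drift over time; a single-step audit of the threshold would miss any such drift. The ml-fairness-gym repository linked in the abstract provides the paper's actual environments and agents.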

Supplementary Material

PDF File (p525-d_amour-supp.pdf)
Supplemental material.


Cited By

  • (2024) Supporting Better Insights of Data Science Pipelines with Fine-grained Provenance. ACM Transactions on Database Systems 49:2, 1–42. DOI: 10.1145/3644385. Online publication date: 10 Apr 2024.
  • (2024) Oh, Behave! Country Representation Dynamics Created by Feedback Loops in Music Recommender Systems. Proceedings of the 18th ACM Conference on Recommender Systems, 1022–1027. DOI: 10.1145/3640457.3688187. Online publication date: 8 Oct 2024.
  • (2024) From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1984–2006. DOI: 10.1145/3630106.3659020. Online publication date: 3 Jun 2024.
  • (2024) Law and the Emerging Political Economy of Algorithmic Audits. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1255–1267. DOI: 10.1145/3630106.3658970. Online publication date: 3 Jun 2024.
  • (2024) Structural Interventions and the Dynamics of Inequality. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1014–1030. DOI: 10.1145/3630106.3658952. Online publication date: 3 Jun 2024.
  • (2024) Algorithmic Fairness in Performative Policy Learning: Escaping the Impossibility of Group Fairness. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 616–630. DOI: 10.1145/3630106.3658929. Online publication date: 3 Jun 2024.
  • (2024) Insights From Insurance for Fair Machine Learning. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 407–421. DOI: 10.1145/3630106.3658914. Online publication date: 3 Jun 2024.
  • (2024) Operationalizing the Search for Less Discriminatory Alternatives in Fair Lending. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 377–387. DOI: 10.1145/3630106.3658912. Online publication date: 3 Jun 2024.
  • (2024) A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 207–220. DOI: 10.1145/3630106.3658900. Online publication date: 3 Jun 2024.
  • (2024) Designing Long-term Group Fair Policies in Dynamical Systems. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 20–50. DOI: 10.1145/3630106.3658538. Online publication date: 3 Jun 2024.


Published In

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
January 2020
895 pages
ISBN:9781450369367
DOI:10.1145/3351095
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Conference

FAT* '20

Article Metrics

  • Downloads (Last 12 months): 1,418
  • Downloads (Last 6 weeks): 127

Reflects downloads up to 24 Nov 2024.

