DOI: 10.1145/3442188.3445876
Research article

Group Fairness: Independence Revisited

Published: 01 March 2021

Abstract

This paper critically examines arguments against independence, a measure of group fairness also known as statistical parity and as demographic parity. In recent discussions of fairness in computer science, some have maintained that independence is not a suitable measure of group fairness. This position is at least partially based on two influential papers (Dwork et al., 2012; Hardt et al., 2016) that provide arguments against independence. We revisit these arguments, and we find that the case against independence is rather weak. We also give arguments in favor of independence, showing that it plays a distinctive role in considerations of fairness. Finally, we discuss how to balance different fairness considerations.
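Independence requires that the model's prediction be statistically independent of the group attribute; for a binary classifier this means the positive-prediction rate is the same in every group. A minimal sketch of checking this criterion (the function name and the toy data are illustrative, not taken from the paper):

```python
def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates between groups.

    Independence (statistical/demographic parity) holds exactly
    when the returned gap is 0.
    """
    # Collect predictions per group.
    by_group = {}
    for yhat, g in zip(y_pred, groups):
        by_group.setdefault(g, []).append(yhat)
    # Positive-prediction rate for each group.
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)


# Toy example: group "a" receives positive predictions at rate 0.75,
# group "b" at rate 0.25, so the gap is 0.5 and independence fails.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(y_pred, groups))  # 0.5
```

In practice this gap is computed on held-out data, and "approximate independence" is often operationalized as the gap (or the rate ratio) staying within a chosen tolerance.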

References

[1]
Angwin, J., J. Larson, S. Mattu, and L. Kirchner. 2016. Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica.
[2]
Barocas, S., M. Hardt, and A. Narayanan. 2019. Fairness and Machine Learning. fairmlbook.org.
[3]
Berk, R., H. Heidari, S. Jabbari, M. Kearns, and A. Roth. 2018. Fairness in Criminal Justice Risk Assessments: The State of the Art. Sociological Methods & Research.
[4]
Chouldechova, A. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv:1703.00056v1.
[5]
Corbett-Davies, S., E. Pierson, A. Feller, S. Goel, and A. Huq. 2017. Algorithmic Decision Making and the Cost of Fairness. KDD '17: 797--806.
[6]
Dawid, A. P. 1979. Conditional Independence in Statistical Theory. Journal of the Royal Statistical Society. Series B (Methodological) 41(1): 1--31.
[7]
Dwork, C., M. Hardt, T. Pitassi, O. Reingold, and R. S. Zemel. 2012. Fairness through Awareness. Proc. ACM ITCS, pp. 214--226.
[8]
Hardt, M., E. Price, and N. Srebro. 2016. Equality of Opportunity in Supervised Learning. Advances in Neural Information Processing Systems.
[9]
Hertweck, C. 2020. Designing Affirmative Action Policies under Uncertainty. Master's thesis, University of Helsinki.
[10]
Kamishima, T., S. Akaho, and J. Sakuma. 2011. Fairness-aware Learning through Regularization Approach. 2011 IEEE 11th International Conference on Data Mining Workshops.
[11]
Kearns, M., S. Neel, A. Roth, and Z. S. Wu. 2018. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. PMLR 80: 2564--2572.
[12]
Kleinberg, J. M., S. Mullainathan, and M. Raghavan. 2016. Inherent Trade-Offs in the Fair Determination of Risk Scores. CoRR abs/1609.05807.
[13]
Loi, M., A. Herlitz, and H. Heidari. 2019. A Philosophical Theory of Fairness for Prediction-Based Decisions. http://dx.doi.org/10.2139/ssrn.3450300.
[14]
Miller, D. 2017. Justice. The Stanford Encyclopedia of Philosophy.
[15]
Väyrynen, P. 2019. Thick Ethical Concepts. The Stanford Encyclopedia of Philosophy.
[16]
Wasserman, L. 2004. All of Statistics. Springer Texts in Statistics. New York: Springer.
[17]
Zemel, R., Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. 2013. Learning fair representations. ICML'13: 325--333.




Published In

FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021
899 pages
ISBN:9781450383097
DOI:10.1145/3442188
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. accuracy
  2. affirmative action
  3. demographic parity
  4. fairness
  5. independence
  6. separation
  7. statistical parity
  8. sufficiency

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Swiss National Science Foundation

Conference

FAccT '21


Cited By

  • (2024) What Fairness Metrics Can Really Tell You: A Case Study in the Educational Domain. Proceedings of the 14th Learning Analytics and Knowledge Conference, 792--799. https://doi.org/10.1145/3636555.3636873
  • (2024) The Conflict Between Algorithmic Fairness and Non-Discrimination: An Analysis of Fair Automated Hiring. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1907--1916. https://doi.org/10.1145/3630106.3659015
  • (2024) Insights From Insurance for Fair Machine Learning. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 407--421. https://doi.org/10.1145/3630106.3658914
  • (2024) Reliability Gaps Between Groups in COMPAS Dataset. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 113--126. https://doi.org/10.1145/3630106.3658544
  • (2024) Achieving Equalized Explainability Through Data Reconstruction. 2024 International Joint Conference on Neural Networks (IJCNN), 1--8. https://doi.org/10.1109/IJCNN60899.2024.10651184
  • (2024) Objective metrics for ethical AI: a systematic literature review. International Journal of Data Science and Analytics. https://doi.org/10.1007/s41060-024-00541-w
  • (2024) Policy advice and best practices on bias and fairness in AI. Ethics and Information Technology 26(2). https://doi.org/10.1007/s10676-024-09746-w
  • (2023) Achieving descriptive accuracy in explanations via argumentation: The case of probabilistic classifiers. Frontiers in Artificial Intelligence 6. https://doi.org/10.3389/frai.2023.1099407
  • (2023) A multidomain relational framework to guide institutional AI research and adoption. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 512--519. https://doi.org/10.1145/3600211.3604718
  • (2023) Individual fairness for local private graph neural network. Knowledge-Based Systems 268. https://doi.org/10.1016/j.knosys.2023.110490
