
DOI: 10.1145/3394486.3403263

Algorithmic Decision Making with Conditional Fairness

Published: 20 August 2020

Abstract

Fairness issues have raised great concerns in decision-making systems, and various fairness notions have been proposed to measure the degree to which an algorithm is unfair. In practice, there frequently exists a set of variables, which we term fair variables, that are pre-decision covariates such as users' choices. The effects of fair variables are irrelevant in assessing the fairness of the decision support algorithm. We therefore define conditional fairness, a sounder fairness metric, by conditioning on the fair variables. Given different prior knowledge of fair variables, we demonstrate that traditional fairness notions, such as demographic parity and equalized odds, are special cases of our conditional fairness notion. Moreover, we propose a Derivable Conditional Fairness Regularizer (DCFR), which can be integrated into any decision-making model, to track the trade-off between precision and fairness of algorithmic decision making. Specifically, an adversarial-representation-based conditional independence loss is proposed in our DCFR to measure the degree of unfairness. With extensive experiments on three real-world datasets, we demonstrate the advantages of our conditional fairness notion and DCFR.
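
The paper's implementation is not included on this page. Purely as a hedged illustration of the kind of regularizer the abstract describes, the sketch below shows a generic adversarial conditional-fairness penalty in PyTorch: an adversary tries to recover the sensitive attribute S from the learned representation together with the fair variables, and the predictor is penalized by the adversary's success. All class names, dimensions, and the loss weighting are illustrative assumptions, not the authors' DCFR implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Maps raw features X to a representation Z.
    def __init__(self, x_dim, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    # Predicts the decision from the representation Z.
    def __init__(self, z_dim=32):
        super().__init__()
        self.out = nn.Linear(z_dim, 1)
    def forward(self, z):
        return self.out(z).squeeze(-1)

class ConditionalAdversary(nn.Module):
    # Tries to recover the sensitive attribute S from (Z, fair variables).
    # Conditioning on the fair variables is what makes the penalty conditional.
    def __init__(self, z_dim=32, fair_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + fair_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, z, fair):
        return self.net(torch.cat([z, fair], dim=-1)).squeeze(-1)

def regularized_losses(enc, pred, adv, x, y, s, fair, lam=1.0):
    # Returns (model loss, adversary loss) for one batch; y and s are float {0,1} tensors.
    z = enc(x)
    task_loss = F.binary_cross_entropy_with_logits(pred(z), y)
    # Adversary objective: recover S from a frozen copy of Z plus the fair variables.
    adv_loss = F.binary_cross_entropy_with_logits(adv(z.detach(), fair), s)
    # Encoder and predictor are penalized when S remains recoverable given the fair variables.
    unfairness = F.binary_cross_entropy_with_logits(adv(z, fair), s)
    model_loss = task_loss - lam * unfairness
    return model_loss, adv_loss

In an alternating training loop the adversary would minimize adv_loss while the encoder and predictor minimize model_loss, so raising lam trades prediction accuracy for a representation from which S is harder to recover given the fair variables, mirroring the precision-fairness trade-off described above.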

Supplementary Material

MP4 File (3394486.3403263.mp4)




Information

    Published In

    KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
    August 2020
    3664 pages
    ISBN:9781450379984
    DOI:10.1145/3394486
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 August 2020

    Permissions

    Request permissions for this article.


    Author Tags

    1. conditional independence
    2. fairness

    Qualifiers

    • Research-article

    Conference

    KDD '20

    Acceptance Rates

    Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

    Article Metrics

    • Downloads (Last 12 months)95
    • Downloads (Last 6 weeks)10
    Reflects downloads up to 19 Nov 2024

    Cited By
    • (2024) Gradient-Based Local Causal Structure Learning. IEEE Transactions on Cybernetics 54(1), 486-495. DOI: 10.1109/TCYB.2023.3237635. Online publication date: Jan-2024.
    • (2024) Learning fair representations via rebalancing graph structure. Information Processing & Management 61(1), 103570. DOI: 10.1016/j.ipm.2023.103570. Online publication date: Jan-2024.
    • (2023) Ai Enforcement: Examining the Impact of Ai on Judicial Fairness and Public Safety. SSRN Electronic Journal. DOI: 10.2139/ssrn.4533047. Online publication date: 2023.
    • (2023) Personalized Pricing with Group Fairness Constraint. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1520-1530. DOI: 10.1145/3593013.3594097. Online publication date: 12-Jun-2023.
    • (2023) Causal Discovery and Causal Inference Based Counterfactual Fairness in Machine Learning. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1-5. DOI: 10.1109/ICASSP49357.2023.10095194. Online publication date: 4-Jun-2023.
    • (2023) PreCoF: counterfactual explanations for fairness. Machine Learning 113(5), 3111-3142. DOI: 10.1007/s10994-023-06319-8. Online publication date: 28-Mar-2023.
    • (2022) Towards Principled User-side Recommender Systems. Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 1757-1766. DOI: 10.1145/3511808.3557476. Online publication date: 17-Oct-2022.
    • (2022) Regulatory Instruments for Fair Personalized Pricing. Proceedings of the ACM Web Conference 2022, 4-15. DOI: 10.1145/3485447.3512046. Online publication date: 25-Apr-2022.
    • (2022) A brief review on algorithmic fairness. Management System Engineering 1(1). DOI: 10.1007/s44176-022-00006-z. Online publication date: 10-Nov-2022.
    • (2022) Algorithmic fairness datasets: the story so far. Data Mining and Knowledge Discovery 36(6), 2074-2152. DOI: 10.1007/s10618-022-00854-z. Online publication date: 1-Nov-2022.
