DOI: 10.1145/3616855.3635731
Extended Abstract (WSDM Conference Proceedings)

Framework for Bias Detection in Machine Learning Models: A Fairness Approach

Published: 04 March 2024

Abstract

This research addresses bias and inequity in binary classification problems in machine learning. Although ethical frameworks for artificial intelligence exist, they provide little detailed guidance on the practices and techniques needed to address these issues. The main objective is to identify and analyze the theoretical and practical components involved in detecting and mitigating biases and inequities in machine learning. The proposed approach combines best practices, ethics, and technology to promote the responsible use of artificial intelligence in Colombia. The methodology covers the definition of performance and fairness objectives; interventions at the pre-processing, in-processing, and post-processing stages; and the generation of recommendations and model explainability.
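To make the methodology concrete, here is a minimal, self-contained sketch of two of the stages it names: measuring group fairness for a binary classifier, and a pre-processing intervention via instance reweighing in the style of Kamiran and Calders. This is an illustration under assumed conditions (synthetic data, a single binary sensitive attribute), not the paper's implementation; all names in the snippet are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n=5000):
    """Synthetic data where a binary sensitive attribute `a` leaks into the label."""
    a = rng.integers(0, 2, n)                        # sensitive attribute (0/1)
    x = rng.normal(size=(n, 3)) + 0.5 * a[:, None]   # features correlated with `a`
    logits = x @ np.array([1.0, -0.5, 0.8]) + 0.8 * a
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return x, y, a

def demographic_parity_diff(y_pred, a):
    """|P(Yhat=1 | A=1) - P(Yhat=1 | A=0)|: 0 means equal positive rates."""
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

def reweigh(y, a):
    """Per-instance weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    w = np.empty(len(y), dtype=float)
    for av in (0, 1):
        for yv in (0, 1):
            mask = (a == av) & (y == yv)
            w[mask] = (a == av).mean() * (y == yv).mean() / max(mask.mean(), 1e-12)
    return w

x, y, a = make_data()

# Baseline: no intervention.
base = LogisticRegression(max_iter=1000).fit(x, y)
print("baseline DP difference: ", demographic_parity_diff(base.predict(x), a))

# Pre-processing intervention: reweigh instances so that `a` and `y` are
# independent in the weighted training data, then refit the same model.
fair = LogisticRegression(max_iter=1000).fit(x, y, sample_weight=reweigh(y, a))
print("reweighed DP difference:", demographic_parity_diff(fair.predict(x), a))
```

With the weights applied, the sensitive attribute and the label are statistically independent in the weighted training distribution, which typically narrows the gap in positive prediction rates between groups at some cost in raw accuracy.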
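The post-processing stage can be sketched the same way. Below, group-specific decision thresholds are chosen on scores so that each group's positive prediction rate hits a common target; this is one standard post-processing technique, offered as an assumed illustration rather than the method the paper prescribes.

```python
import numpy as np

def group_thresholds(scores, a, target_rate):
    """Per-group thresholds so each group's positive rate is ~target_rate."""
    return {g: np.quantile(scores[a == g], 1.0 - target_rate) for g in np.unique(a)}

def apply_thresholds(scores, a, thresholds):
    """Binarize scores using the threshold of each instance's group."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, a)])

# Example with synthetic scores whose distribution differs by group.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 1000)
scores = rng.beta(2 + a, 2)          # group 1 tends to receive higher scores
thr = group_thresholds(scores, a, target_rate=0.3)
y_hat = apply_thresholds(scores, a, thr)
print({int(g): round(y_hat[a == g].mean(), 2) for g in (0, 1)})  # both ~0.30
```

Equalizing positive rates this way enforces demographic parity at the decision boundary; an equalized-odds-style post-processor would instead match error rates across groups.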



    Published In

    WSDM '24: Proceedings of the 17th ACM International Conference on Web Search and Data Mining
    March 2024
    1246 pages
ISBN: 9798400703713
DOI: 10.1145/3616855
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 04 March 2024


    Author Tags

    1. bias mitigation
    2. explainability
    3. machine learning fairness
    4. supervised learning

    Qualifiers

    • Extended-abstract

    Conference

    WSDM '24

    Acceptance Rates

    Overall Acceptance Rate 498 of 2,863 submissions, 17%
