Research Article
DOI: 10.1145/3593013.3594039

Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness

Published: 12 June 2023

Abstract

Commensurate with the rise in algorithmic bias research, myriad algorithmic bias mitigation strategies have been proposed in the literature. Nonetheless, many have voiced concerns about the lack of transparency that accompanies mitigation methods and the paucity of methods that satisfy practitioners' protocol and data limitations. Influence functions from robust statistics provide a novel opportunity to overcome both issues. Previous work demonstrates the power of influence functions to improve fairness outcomes. This work proposes a novel family of fairness solutions, coined influential fairness (IF), that is human-understandable while remaining agnostic to the underlying machine learning model and the choice of fairness metric. We investigate practitioner profiles and design mitigation methods for practitioners whose limitations discourage them from utilizing existing bias mitigation methods.
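
The paper body is not reproduced on this page, so the following is only a reference sketch: the standard first-order influence-function approximation from robust statistics (Cook and Weisberg, 1980; Koh and Liang, 2017) that influence-based fairness methods such as IF generally build on. The evaluation functional f below is a placeholder for whichever differentiable (or smoothed) fairness metric a practitioner selects; how the paper itself instantiates and ranks these scores for its add, remove, or relabel actions is not shown here.

% LaTeX sketch, for reference only -- not the paper's exact estimator.
% \hat{\theta} is the empirical risk minimizer over training points z_1, ..., z_n.
\[
\hat{\theta} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(z_i, \theta),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta})
\]
% Upweighting a training point z by \epsilon perturbs the fitted parameters by roughly
\[
\mathcal{I}_{\mathrm{params}}(z) = \left.\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\right|_{\epsilon=0}
= -\,H_{\hat{\theta}}^{-1}\, \nabla_{\theta}\, \ell(z, \hat{\theta})
\]
% and its estimated effect on a differentiable evaluation functional f
% (a test loss, or a smooth surrogate of a chosen fairness metric) is
\[
\mathcal{I}_{f}(z) \approx -\,\nabla_{\theta} f(\hat{\theta})^{\top}\, H_{\hat{\theta}}^{-1}\, \nabla_{\theta}\, \ell(z, \hat{\theta})
\]

Under this approximation, removing a training point z changes f by roughly \(-\tfrac{1}{n}\,\mathcal{I}_{f}(z)\), so training points can be ranked by their estimated positive or negative effect on the chosen fairness metric without refitting the model for each candidate edit.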


Cited By

  • FairDRO: Group fairness regularization via classwise robust optimization. Neural Networks (Nov. 2024), 106891. https://doi.org/10.1016/j.neunet.2024.106891
  • Towards robust neural networks: Exploring counterfactual causality-based repair. Expert Systems with Applications 257 (Dec. 2024), 125082. https://doi.org/10.1016/j.eswa.2024.125082


Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023, 1929 pages
ISBN: 9798400701924
DOI: 10.1145/3593013
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. bias mitigation
  2. ethics
  3. fairness
  4. machine learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '23

