DOI: 10.1145/3442188.3445912

Research Article

Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness

Published: 01 March 2021

Abstract

Decision-making systems increasingly orchestrate our world, so how to intervene on their algorithmic components to build fair and equitable systems is a question of utmost importance, one substantially complicated by the context-dependent nature of fairness and discrimination. Modern decision-making systems that allocate resources or information to people (e.g., school choice, advertising) incorporate machine-learned predictions in their pipelines, raising concerns about strategic behavior and constrained allocation that are usually tackled in the context of mechanism design. Although both machine learning and mechanism design have developed frameworks for addressing fairness and equity, in some complex decision-making systems neither framework is individually sufficient. In this paper, we develop the position that building fair decision-making systems requires overcoming limitations which, we argue, are inherent to each field. Our ultimate objective is to build an encompassing framework that cohesively bridges the individual frameworks of mechanism design and machine learning. We begin to lay the groundwork towards this goal by comparing the perspective each discipline takes on fair decision-making, teasing out the lessons each field has taught and can teach the other, and highlighting application domains that require strong collaboration between the two disciplines.
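As a concrete, purely illustrative reading of the pipeline the abstract describes, the sketch below pairs a machine-learned score with a capacity-constrained selection rule that enforces a per-group floor. It is a minimal toy example of the ML-plus-mechanism-design framing, not a method from the paper; the `Applicant` type, the `allocate` rule, and all scores and quotas are hypothetical.

```python
# Toy sketch (not from the paper): a machine-learned score feeding a
# constrained allocation rule. All names and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    group: str      # protected-group label, e.g. "A" or "B"
    score: float    # hypothetical ML-predicted success probability


def allocate(applicants, capacity, min_per_group):
    """Greedy score-based allocation with a per-group floor.

    First reserve `min_per_group` seats for the top-scored members of each
    group, then fill the remaining seats purely by score: the predictor
    ranks, the (constrained) mechanism allocates.
    """
    selected = []
    remaining = sorted(applicants, key=lambda a: a.score, reverse=True)

    # Reserved phase: honor the per-group floor.
    for group in {a.group for a in applicants}:
        top_in_group = [a for a in remaining if a.group == group][:min_per_group]
        selected.extend(top_in_group)
        remaining = [a for a in remaining if a not in top_in_group]

    # Open phase: fill leftover capacity by score alone.
    open_seats = capacity - len(selected)
    selected.extend(remaining[:open_seats])
    return selected


if __name__ == "__main__":
    pool = [
        Applicant("x1", "A", 0.91), Applicant("x2", "A", 0.84),
        Applicant("x3", "B", 0.78), Applicant("x4", "B", 0.52),
        Applicant("x5", "A", 0.47),
    ]
    for a in allocate(pool, capacity=3, min_per_group=1):
        print(a.name, a.group, a.score)
```

Running it selects the top applicant from each group before filling the last seat by score alone; a biased or strategically manipulable score changes who clears the floor, which is precisely where the fairness tools of machine learning and mechanism design meet.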




Published In

FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021
899 pages
ISBN: 9781450383097
DOI: 10.1145/3442188

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 March 2021


Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '21


Cited By

  • (2024) Optimal Data-Driven Hiring With Equity for Underrepresented Groups. Production and Operations Management. DOI: 10.1177/10591478231224942. Online publication date: 6-Feb-2024.
  • (2024) Participatory Objective Design via Preference Elicitation. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pages 1637-1662. DOI: 10.1145/3630106.3658994. Online publication date: 3-Jun-2024.
  • (2024) Group Fairness Refocused: Assessing the Social Impact of ML Systems. 2024 11th IEEE Swiss Conference on Data Science (SDS), pages 189-196. DOI: 10.1109/SDS60720.2024.00034. Online publication date: 30-May-2024.
  • (2023) Proactive and reactive engagement of artificial intelligence methods for education: a review. Frontiers in Artificial Intelligence, volume 6. DOI: 10.3389/frai.2023.1151391. Online publication date: 5-May-2023.
  • (2023) Markovian Search with Socially Aware Constraints. SSRN Electronic Journal. DOI: 10.2139/ssrn.4347447. Online publication date: 2023.
  • (2023) The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pages 1-23. DOI: 10.1145/3617694.3623261. Online publication date: 30-Oct-2023.
  • (2023) Strategic Evaluation. Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pages 1-12. DOI: 10.1145/3617694.3623237. Online publication date: 30-Oct-2023.
  • (2023) Fairness Implications of Encoding Protected Categorical Attributes. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pages 454-465. DOI: 10.1145/3600211.3604657. Online publication date: 8-Aug-2023.
  • (2023) Envisioning Equitable Speech Technologies for Black Older Adults. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pages 379-388. DOI: 10.1145/3593013.3594005. Online publication date: 12-Jun-2023.
  • (2023) Optimization's Neglected Normative Commitments. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pages 50-63. DOI: 10.1145/3593013.3593976. Online publication date: 12-Jun-2023.
