DOI: 10.1145/3491102.3517672

Designing Fair AI in Human Resource Management: Understanding Tensions Surrounding Algorithmic Evaluation and Envisioning Stakeholder-Centered Solutions

Published: 28 April 2022

Abstract

Enterprises have recently adopted AI in human resource management (HRM) to evaluate employees’ work performance. However, in an HRM context where multiple stakeholders with different incentives are complexly intertwined, designing AI around the needs of a single stakeholder group (e.g., enterprises or HR managers) is problematic. Our research investigates what tensions surrounding AI in HRM exist among stakeholders and explores design solutions to balance those tensions. Through stakeholder-centered participatory workshops with diverse stakeholders (including employees, employers/HR teams, and AI/business experts), we identified five major tensions: 1) divergent perspectives on fairness, 2) the accuracy of AI, 3) the transparency of the algorithm and its decision process, 4) the interpretability of algorithmic decisions, and 5) the trade-off between productivity and inhumanity. We present stakeholder-centered design ideas to mitigate these tensions and further discuss how to promote harmony among various stakeholders in the workplace.

        Published In

        CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
        April 2022
        10459 pages
        ISBN: 9781450391573
        DOI: 10.1145/3491102
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 28 April 2022

        Author Tags

        1. Algorithmic management
        2. Artificial intelligence (AI)
        3. Explainable AI (XAI)
        4. Fair and responsible AI
        5. Future of work
        6. Human Intervention
        7. Human resource management
        8. Interpretability
        9. Stakeholder-centered design
        10. Transparency

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Funding Sources

        • the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea

        Conference

        CHI '22: CHI Conference on Human Factors in Computing Systems
        April 29 - May 5, 2022
        New Orleans, LA, USA

        Acceptance Rates

        Overall acceptance rate: 6,199 of 26,314 submissions (24%)

        Cited By

        • (2025) Artificial intelligence for renewable energy strategies and techniques. In Computer Vision and Machine Intelligence for Renewable Energy Systems, 17–39. https://doi.org/10.1016/B978-0-443-28947-7.00002-1. Online publication date: 2025.
        • (2024) Artificial Intelligence Educational Pedagogy Development. In Educational Perspectives on Digital Technologies in Modeling and Management, 65–93. https://doi.org/10.4018/979-8-3693-2314-4.ch003. Online publication date: 7-Jan-2024.
        • (2024) Definitions of Fairness Differ Across Socioeconomic Groups & Shape Perceptions of Algorithmic Decisions. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2: 1–31. https://doi.org/10.1145/3687058. Online publication date: 8-Nov-2024.
        • (2024) The Algorithm and the Org Chart: How Algorithms Can Conflict with Organizational Structures. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2: 1–31. https://doi.org/10.1145/3686903. Online publication date: 8-Nov-2024.
        • (2024) Teacher, Trainer, Counsel, Spy: How Generative AI can Bridge or Widen the Gaps in Worker-Centric Digital Phenotyping of Wellbeing. In Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work, 1–13. https://doi.org/10.1145/3663384.3663401. Online publication date: 25-Jun-2024.
        • (2024) Lay User Involvement in Developing Human-centric Responsible AI Systems: When and How? ACM Journal on Responsible Computing 1, 2: 1–25. https://doi.org/10.1145/3652592. Online publication date: 20-Jun-2024.
        • (2024) The AI-DEC: A Card-based Design Method for User-centered AI Explanations. In Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1010–1028. https://doi.org/10.1145/3643834.3661576. Online publication date: 1-Jul-2024.
        • (2024) A Critical Survey on Fairness Benefits of Explainable AI. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1579–1595. https://doi.org/10.1145/3630106.3658990. Online publication date: 3-Jun-2024.
        • (2024) Spiritual AI: Exploring the Possibilities of a Human-AI Interaction Beyond Productive Goals. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3613905.3650743. Online publication date: 11-May-2024.
        • (2024) Re-examining User Burden in Human-AI Interaction: Focusing on a Domain-Specific Approach. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–4. https://doi.org/10.1145/3613905.3638186. Online publication date: 11-May-2024.
