Open access
DOI: 10.1145/3351095.3372849

What does it mean to 'solve' the problem of discrimination in hiring?: social, technical and legal perspectives from the UK on automated hiring systems

Published: 27 January 2020

Abstract

Discriminatory practices in recruitment and hiring are an ongoing issue that is a concern not just for workplace relations, but also for wider understandings of economic justice and inequality. The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the way decisions are made about who is eligible for jobs, and why, is rapidly changing with the advent and growing uptake of automated hiring systems (AHSs) powered by data-driven tools. Evidence of the extent of this uptake around the globe is scarce, but a recent report estimated that 98% of Fortune 500 companies use Applicant Tracking Systems of some kind in their hiring process, a trend driven by perceived efficiency gains and cost savings. Key concerns about such AHSs include their lack of transparency and their potential to limit access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and to promote diversity and inclusion at work. Yet whilst these tools have a growing user base around the world, such claims of 'bias mitigation' are rarely scrutinised and evaluated, and when they are, the analysis has almost exclusively been from a US socio-legal perspective.
In this paper, we introduce a perspective from outside the US by critically examining how three prominent automated hiring systems (AHSs) in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. These systems have been chosen because they explicitly claim to address issues of discrimination in hiring and, unlike many of their competitors, provide some information about how their systems work that can inform an analysis. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US, not only in terms of hiring and equality law, but also in terms of data protection (DP) law. We argue that this might be important for addressing concerns about transparency and could pose a challenge to building bias mitigation into AHSs in a way that definitively meets EU legal standards. This is significant because these AHSs, especially those developed in the US, may obscure rather than reduce systemic discrimination in the workplace.
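The abstract notes that these tools are "validated and audited for bias". As a purely illustrative aid, and not the authors' method or any vendor's actual implementation, the sketch below shows one common operationalisation of such an audit in the US context that the paper contrasts with the UK: comparing selection rates across groups against the EEOC "four-fifths" rule of thumb. All data, group labels and function names here are hypothetical assumptions for the example.

```python
# Illustrative sketch only: a selection-rate ("adverse impact") audit of the kind
# US-oriented vendors often cite (the EEOC "four-fifths" rule of thumb). This is
# NOT the method described in the paper, and UK/EU equality law does not codify
# a fixed numerical threshold. All data and group labels below are hypothetical.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, was_selected) pairs -> {group: rate}."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def adverse_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    Under the US four-fifths rule of thumb, ratios below 0.8 flag possible
    adverse impact; this threshold has no direct equivalent in UK law."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (group, passed automated screen)
    data = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 25 + [("B", False)] * 75
    print(adverse_impact_ratios(data, reference_group="A"))
    # {'A': 1.0, 'B': 0.625} -> group B falls below the 0.8 rule of thumb
```

On these hypothetical figures, group B's screening pass rate is 0.625 of group A's, which would fall below the 0.8 rule of thumb. Under UK law, by contrast, indirect discrimination is assessed through particular disadvantage, justification and proportionality rather than a fixed numerical threshold, which is part of the gap between US-built bias audits and the UK socio-legal context that the paper examines.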



Published In

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
January 2020
895 pages
ISBN:9781450369367
DOI:10.1145/3351095
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. GDPR
  2. algorithmic decision-making
  3. automated hiring
  4. discrimination
  5. fairness
  6. social justice
  7. socio-technical systems

Qualifiers

  • Research-article

Funding Sources

  • European Research Council (ERC) Starting Grant

Conference

FAT* '20

