
6

Understanding United States anti-discrimination law

In this chapter, we hope to give you an appreciation of what United States anti-discrimination law is and isn’t. We’ll use the U.S. legal experience as a case study of how to regulate discrimination. Other countries take different approaches. We don’t aim to describe U.S. law comprehensively but rather give a stylized description of the key concepts.

We’ll start with a history of how the major civil rights statutes came to be, and draw lessons from this history that continue to be relevant today. Law represents one attempt to operationalize moral notions. It is an important and illustrative one. We will learn from the way in which the law navigates many tricky tradeoffs. But we will also study its limitations and explain why we think algorithmic fairness shouldn’t stop at legal compliance.

The final section addresses the specifics of regulating machine learning. Although U.S. antidiscrimination law predates the widespread use of machine learning, it is just as applicable if a decision maker uses machine learning or other statistical techniques. That said, machine learning introduces many complications to the application of these laws, and existing law may be inadequate to address some types of discrimination that arise when machine learning is involved. At the same time, we believe that there is also an opportunity to exercise new regulatory tools to rein in algorithmic discrimination.

This chapter can be skipped on a first reading of the book, but a few connections are worth pointing out. The first section elaborates on a central viewpoint of the book, especially Chapter 4, which is that attributes like race and gender are salient because they have historically served as organizing principles of many societies. That section also sets up Chapter 8 that conceives of discrimination more broadly than in discrete moments of decision making. The section on the limitations of the law motivates another core theme of this book, which is using the debates on machine learning and discrimination as an opportunity to revisit the moral foundations of fairness.

History and overview of U.S. anti-discrimination law

Every inch of civil rights protection built into law was hard won through decades of activism. In this section, we briefly describe these histories of oppression and discrimination, the movements that arose in response to them, and the legal changes that they accomplished.

Black Civil Rights

The Black civil rights movement, often simply called the civil rights movement, has its roots in slavery in the United States and the rampant racial discrimination that persisted after its abolition. The period immediately following the American civil war and the abolition of slavery (roughly 1865-1877) is called the Reconstruction era. It resulted in substantial progress in civil rights. Notably, the Constitution was amended to abolish slavery (13th amendment), require equal protection under the laws (14th amendment), and guarantee voting rights regardless of race (15th amendment).

However, these gains were rapidly undone as White supremacists gained political control in the Southern states, ushering in the so-called Jim Crow era, a roughly 75-year period in which the state orchestrated stark racial segregation, discrimination, and near-total disenfranchisement of Black people. Nearly every facet of life was racially segregated, including residential neighborhoods, schools, workplaces, and places of public accommodation such as restaurants and hotels. This segregation was blessed by the Supreme Court in 1896, when it ruled that laws mandating segregation did not violate the Equal Protection clause under the “separate but equal” doctrine.“Plessy v. Ferguson” (Supreme Court, 1896). But in practice, things were far from equal. The jobs available to Black people usually paid far less, schools were underfunded and subject to closure, and accommodations were fewer and of inferior quality. As late as the 1950s, a cross-country drive by a Black person would have involved great peril, e.g., showing up at a small town at night and being refused a place to stay.Isabel Wilkerson, The Warmth of Other Suns: The Epic Story of America’s Great Migration (New York, NY: Penguin Random House, 2011). Black people could not democratically challenge these laws as the states erected numerous practical barriers to voting — ostensibly race neutral, but with vastly different effects by race — and Black people at the polls were often met with violence. As a result, disenfranchisement was highly effective. For example, in Louisiana until the mid-1940s, less than 1% of African Americans were registered to vote.Luke Keele, William Cubbison, and Ismail White, “Suppressing Black Votes: A Historical Case Study of Voting Restrictions in Louisiana,” American Political Science Review 115, no. 2 (2021): 694–700. (Data limitations preclude a nation-wide assessment of the effectiveness of disenfranchisement.)

Meanwhile, in the Northern states, racial discrimination operated in more indirect ways. Residential zoning laws that prohibited higher-density, lower-cost housing were used to keep poorer Black residents out of White neighborhoods. The practice of “redlining” by banks, orchestrated to some extent by federal regulators, limited the availability of credit, especially mortgages, in specific neighborhoods.Richard Rothstein, The Color of Law: A Forgotten History of How Our Government Segregated America (New York, NY: W. W. Norton & Company, 2018). The justification proffered was the level of risk, but it had the effect of discriminating against Black communities. Another prevalent technique to achieve segregation was the use of racially restrictive covenants, in which property owners in a neighborhood entered into a contract not to sell or rent to non-White people.This practice continued until the Supreme Court struck it down in 1948 (Shelley v. Kraemer), holding that even though these were private contracts, if the state were to enforce them, it would violate the constitution’s Equal Protection clause.

The civil rights movement emerged in the late 1800s and the early 1900s to confront these widespread practices of racism. Broadly, the movement adopted two complementary strategies: one was to challenge unjust laws and the other was to advance Black society within the constraints of segregation and discrimination. A key moment in the first prong was the formation of the National Association for the Advancement of Colored People in 1909. In addition to lobbying and litigation against Jim Crow laws, it sought to fight against lynching. Prominent efforts under the second prong included the Black entrepreneurship movement — 1900-1930 has been called the golden age of Black businessesJuliet E. K. Walker, History of Black Business in America: Capitalism, Race, Entrepreneurship, Evolution of Modern Business (New York, NY: Twayne Publishers, 1998). — and notable achievements in education. Many of the Historically Black Colleges and Universities were founded during the Jim Crow era.

After decades of activism, an epochal moment was a Supreme Court ruling in 1954 that declared the segregation of public schools unconstitutional. This began the gradual dismantling of the Jim Crow system, a process that would take decades and whose effects we still feel today. The court victories further galvanized the movement, leading to more intense activism and mass protests. This led to major federal legislation in the following decade: the Civil Rights Act of 1960 and the Voting Rights Act of 1965, both of which targeted voter suppression efforts, and the Civil Rights Act of 1964 and the Fair Housing Act of 1968 which targeted private discrimination. We will discuss the latter two in detail throughout this chapter.

Antidiscrimination laws were clearly a product of history and decades- or centuries-long trends – slavery, Jim Crow, and the civil rights movement. At the same time, their proximate causes were often specific, unpredictable events. For example, the assassination of Martin Luther King Jr. provided the impetus for the passage of the Fair Housing Act. They also reflect political compromises that were necessary to secure their passage. For example, Title VII of the Civil Rights Act of 1964 created the Equal Employment Opportunity Commission, but the agency was stripped of the enforcement powers that had been present in the original wording of the title.Francis J Vaas, “Title VII: Legislative History,” BC Indus. & Com. L. Rev. 7 (1965): 431.

Gender Discrimination

The struggle for gender equality also has a long and storied history of activism. In the 1800s, the law did not recognize basic rights of women, including voting and owning property. Changing this was the primary goal of first-wave feminists whose strategies included advocacy, civil disobedience, lobbying, and legal action. The culminating moment was the ratification of the 19th amendment in 1920, guaranteeing women the right to vote (yet, as discussed above, Black women’s right to vote was still limited in the South). Second-wave feminism began in the 1960s. It targeted stereotypes about the role of women in society, private discrimination in education and employment, and bodily rights, including reproductive rights and domestic violence. In the early post-war years, gender norms regressed in some ways (e.g., women lost access to jobs that had been available to them because of the war) which was arguably an impetus for the movement.Betty Friedan, The Feminine Mystique (New York, NY: W. W. Norton & Company, 2001). Two early legislative victories were the Equal Pay Act of 1963 and Title VII of the Civil Rights Act of 1964 prohibiting employment discrimination.Although the latter was primarily a response to the Black civil rights movement, sex was added as a protected category in a last-minute amendment.

However, these did not initially have much impact due to the aforementioned lack of enforcement, and the movement only intensified. An important milestone was the founding of the National Organization for Women in 1966. Borrowing strategies from the Black civil rights movement, the second-wave feminists adopted a plan to litigate in the courts to secure protections for women. A notable court victory in the following decade was the expansion of abortion rights by the Supreme Court.“Roe v. Wade” (Supreme Court, 1973). On the legislative front, two major achievements for gender equality were Title IX of the Education Amendments Act of 1972, which prohibited sex discrimination in federally funded educational programs, and the Equal Credit Opportunity Act, which prohibited sex discrimination in credit.

Education, especially higher education, and credit were both important sectors for women’s rights. Historically, many elite colleges simply did not accept women. Even in the 1970s, women faced many barriers in academia: sexual harassment, higher bars for admission, outright exclusion from some high-status fields such as law and medicine, and limited athletic opportunities. Credit discrimination in the 1970s was similarly stark; for instance, women were often required to reapply for credit upon marriage, usually in the husband’s name.“Senate Report 589 of the 94th Congress” (United States Senate, 1976). After this period, the focus of the feminist movement expanded beyond major legislative victories to include the questioning of gender as a social construct.

LGBTQ Civil Rights

Discriminatory laws against LGBTQ people were historically numerous: prohibition of some sexual behavior (i.e., anti-sodomy lawsWilliam N. Eskridge, Dishonorable Passions: Sodomy Laws in America, 1861-2003 (New York, NY: Viking, 2008).), lack of marriage rights, bans on military service and some other government positions, a failure to prohibit private discrimination and to treat hate crimes as such, and even a prohibition of literature advocating for gay rights under obscenity laws.

Tentative activism began in the 1950s, with the first legal changes coming in the early 1960s. A pivotal moment was the 1969 Stonewall riots, a series of protests in response to a police raid at a New York City gay bar. The aftermath of this event kickstarted the push for U.S. LGBTQ rights, including the gay pride movement for visibility and acceptance. In 1973, the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders dropped homosexuality as a disorder, signaling (and furthering) a major shift in attitudes. The list of legal changes is long and ongoing. It includes state-by-state changes to laws involving sodomy, marriage equality, private discrimination, and hate crimes; a 2003 Supreme Court decision ruling anti-sodomy laws unconstitutional;“Lawrence v. Texas” (Supreme Court, 2003). and a 2015 Supreme Court decision guaranteeing the right to marry for same-sex couples nationwide.“Obergefell v. Hodges” (Supreme Court, 2015). In parallel, the push for LGBTQ rights in the private sector has progressed in part by interpreting existing statutory prohibitions on sex discrimination, such as Title VII of the Civil Rights Act of 1964, to encompass sexual orientation and gender identity discrimination.“Bostock v. Clayton County, Georgia” (Supreme Court, 2020).

Disability laws

Another dimension of identity covered by anti-discrimination statutes is disability. Over a quarter of adults in the United States today have some type of disability, including mobility disabilities, blindness or other visual disability, deafness or other hearing disability, and cognitive disabilities.Catherine A. Okoro et al., “Prevalence of Disabilities and Health Care Access by Disability Status and Type Among Adults — United States, 2016,” Morbidity and Mortality Weekly Report 67, no. 32 (2018): 882–87, https://doi.org/10.15585/mmwr.mm6732a3. These and other disabilities are distinct identities corresponding to different lived experiences and, sometimes, cultures.Notably Deaf culture; for an introduction see (Carol Padden and Tom Humphries, Inside Deaf Culture (Cambridge, MA: Harvard University Press, 2006)). Still, the emergence of a cross-disability coalition and identity enabled more effective advocacy for disability rights. This movement gained steam in the decades following World War II. Activists aimed to make disability visible, rather than stigmatized, pitied, and hidden, and sought to achieve independent living. Like other rights movements, disabled people faced multiple, mutually-reinforcing barriers: society’s attitudes towards disability and disabled people, the lack of physical accommodations and assistive technologies, and discriminatory policies.Doris Zames Fleischer and Frieda Zames, The Disability Rights Movement: From Charity to Confrontation (JSTOR, 2013). Attitudes that held back disabled people weren’t just prejudice, but also mistaken views of disability as residing in the person (the medical model) instead of, or in addition to, being created by barriers in society (the social model). The first federal law protecting disability rights was the Rehabilitation Act of 1973, which prohibited disability discrimination in federally funded programs. Activism toward a broad civil rights statute continued, with the 1964 Civil Rights Act as a model.
These efforts culminated in the Americans with Disabilities Act (ADA) of 1990. While the ADA has many similarities to the other civil rights statutes, it also has major differences due to its emphasis on accommodation in addition to formal nondiscrimination.

Lessons

The histories of the various civil rights movements hold several lessons that continue to be relevant today. First, the law is a political instrument: it can be used to discriminate, to create the conditions under which discrimination can flourish, or to challenge discrimination. It can be a tool for subjugation or liberation. Laws may be facially neutral, but they are created, interpreted, and enforced by actors that respond to the changing times and to activism. Court decisions are also influenced by contemporary activism and even scholarship.

Our brief historical discussion also helps explain why certain sectors are regulated, and not others. Education, employment, housing, credit, and public accommodation are domains that are both highly salient to people’s life courses and have had histories of discrimination that were deliberately used to subordinate some groups.In addition, there are constitutional limitations on the ability of Congress to regulate private discrimination.

One consequence of this sector-specific approach is that the law can be tailored to the particularities of the sector in an attempt to avoid loopholes. For example, the Fair Housing Act encompasses the full range of practices related to housing including sales, rentals, advertising, and financing. It lists (and prohibits) various ways in which housing agents may subtly mislead or discourage clients belonging to protected classes. Recognizing the importance of financing for securing housing, it prohibits discrimination in financing with respect to “purchase, construction, improvement, repair, or maintenance”. It even prohibits ads indicating a discriminatory preference. And that includes not just categorical statements such as “no children”, but also targeting of ads to certain geographic regions in a way that correlates with race, and the selection of actors used in advertising.

In many cases these attempts to avoid loopholes have held up well in the face of recent technological developments. The prohibition on discriminatory advertising has forced online ad platforms to avoid discriminatory targeting of housing ads.“Facebook, Inc.” (Department of Housing and Urban Development, 2019). But this is not always so. Ride-hailing platforms are able to evade Title VII (employment discrimination) liability even though they terminate drivers based on the (potentially discriminatory) ratings given by passengers.Alex Rosenblat et al., “Discriminating Tastes: Customer Ratings as Vehicles for Bias,” Data & Society, 2016, 1–21.

Even though laws are sector-specific, it is hard to understand discrimination by looking at any one set of institutions (such as employment or education, much less a single organization) in isolation. History shows us that there tend to be multiple interlocking systems of oppression operating in tandem, such as federal housing policy and private-sector discrimination. Similarly, the line between state and private discrimination is not always clear.

History also shows that when disrupted, hierarchies tend to reassert themselves by other means. For example, the end of de jure segregation accelerated the phenomenon of “White flight” from cities to the suburbs, exacerbating de facto segregation. Not only is progress fitful, but regression is also possible. For example, Woodrow Wilson and his administration segregated large parts of the federal workforce in the 1910s, eroding some of the gains Black people had made in previous decades. And as we were writing this chapter, the Supreme Court reversed Roe v. Wade, ending federal protection of abortion rights and enabling severe restrictions on abortion in many states.

Another important point that is not apparent from the laws themselves is that the various protected dimensions of identity have complex and distinct histories of discrimination and activism, even if statutes attempt to treat them all in a uniform and formal way. Even within a single dimension like ethnicity, the oppression and struggles of different groups take drastically different forms. Native Americans endured a century of attempts at forced assimilation in which children were sent to boarding schools and asked to abandon their culture. The Chinese Exclusion Act of 1882 all but eliminated the immigration of Chinese people for over half a century and made conditions inhospitable for the Chinese immigrant community that already existed. During World War II, over 100,000 people of Japanese ancestry, the majority of whom were U.S. citizens, were interned in concentration camps under the pretense that they were disloyal to the country. These are just a few of the more gruesome episodes of discrimination on the basis of race, ethnicity, and national origin in U.S. history, focusing on the actions of the government. National origin discrimination was often a thinly veiled form of racial discrimination. Thus, although the list of protected attributes in the law may grow over time, it is not arbitrary and is deeply informed by history.In the context of equal protection doctrine, the Supreme Court has explicitly listed the criteria that qualify a trait for protection (“heightened scrutiny”): a history of past discrimination, political powerlessness, the irrelevancy of a trait to an individual’s ability to contribute to or participate in society, and immutability. (Lauren Sudeall Lucas, “Identity as a Proxy,” Columbia Law Review 115, no. 6 (2015): 1605–74)

Equality under the law remains a contested and evolving notion. This is especially the case when antidiscrimination runs up against some countervailing value or principle, such as religious freedom or limiting state authority. And because the law is intertwined with our lives and livelihoods in so many ways, equality under the law, in a broad sense, requires far more than formal nondiscrimination. Consider gender equality. The range of legal interventions necessary to achieve it is long and growing. Beyond voting rights and prohibition of sex and gender discrimination, it includes prohibition of pregnancy and marital status discrimination, curbing sexual harassment and sexual violence, abortion rights, maternity leave laws, and childcare subsidies. Each one of these battles has many fronts. For example, the #MeToo movement brought to light the role of non-disparagement clauses by employers in settlements to silence victims of workplace sexual harassment, and there is an ongoing effort to prohibit such clauses.

Finally, legal change is not the end of the road but in some ways the beginning. The effects of past discrimination tend to leave a lasting imprint. The law itself, given political realities, can only do so much to erase the effects of that history.

A summary of the major anti-discrimination statutes: Titles VI and VII of the Civil Rights Act of 1964, the Fair Housing Act, Title IX of the Education Amendments Act of 1972, the Equal Credit Opportunity Act, and the Americans with Disabilities Act.

Law | Year | Covered entities and regulated activities | Protected categories (* = added later)
Title VII | 1964 | Employers, employment agencies, labor unions | Race, color, religion, sex, national origin, pregnancy*
Title VI | 1964 | Any organization receiving federal funding (due to breadth, doesn’t list regulated activities) | Race, color, national origin
FHA | 1968 | Sales, rentals, advertising, and financing of housing | Race, color, religion, national origin, sex*, handicap*, familial status*
Title IX | 1972 | Educational programs receiving federal funding: hiring, pay, rank, sexual harassment, retaliation, segregation & same-sex education | Sex
ECOA | 1974 | Creditors (banks, small loan and finance companies, retail and department stores, credit card companies, and credit unions) | Race*, sex, age*, national origin*, marital status, receipt of public assistance*
ADA | 1990 | Employers, public services, public accommodation | Disability; record of disability; perception of disability

How the law conceives of discrimination

There are many possible ways to define discrimination and attempt to achieve nondiscrimination. In this section, we will discuss how the law conceives of discrimination and how it tries to balance nondiscrimination with other ideals.

Disparate treatment and disparate impact

Imagine an employer turning down a job candidate and explicitly informing them that this decision was on account of a protected characteristic. Such a case would be relatively straightforward to adjudicate based on the text of the statute itself (“It shall be an unlawful employment practice for an employer to fail or refuse to hire an individual… because of such individual’s race, color, religion, sex, or national origin.”) However, in most cases of discrimination, the decision maker’s behavior is less explicit and the evidence is more circumstantial. To deal with these, courts have created two main doctrines called disparate treatment and disparate impact.

Disparate treatment refers to intentional discrimination and roughly matches the average person’s conception of discriminatory behavior. It subsumes the straightforward case described in the previous paragraph. For more circumstantial cases, the Supreme Court has established a so-called burden-shifting framework under Title VII (employment law). First, the plaintiff must establish a “prima facie” case of discrimination by showing that they are a member of a protected class, were qualified for a position, were denied it, and that the position then remained open or was given to someone not in the protected class. If the plaintiff is successful at this, the employer must produce a legitimate, non-discriminatory reason for the adverse decision. The plaintiff then has the burden of proving that the proffered reason is merely a pretext for discrimination.More precisely, this is only one of the possible frameworks; we describe it for illustrative purposes. The full picture—including the game of which framework to choose—is a vast morass. (Martin J Katz, “Unifying Disparate Treatment (Really),” Hastings Law Journal 59, no. 3 (2008): 643)

Disparate treatment usually involves reasoning about what action the defendant would have taken if the plaintiff’s protected characteristic had been different, with all other facts of the case unchanged. Elsewhere in this book we argue why, from a technical perspective, these “attribute-flipping” counterfactuals are at odds with a nuanced understanding of causality and result in brittle tests for discrimination. In any event, the importance of causality in disparate treatment, especially so-called but-for causation, has increased following a 2020 Supreme Court decision“Bostock v. Clayton County, Georgia.” which held that it is impossible to engage in sexual orientation discrimination without engaging in sex discrimination by imagining a counterfactual in which the victim’s sex is changed without affecting anything else, including gender preference. While celebrated from a civil rights perspective because of its implications for LGBTQ rights, we should keep in mind that this represents a narrow understanding of causality and its application in other scenarios may yield conclusions not so favorable to civil rights.
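To make the attribute-flipping counterfactual concrete, here is a minimal sketch. Everything in it, including the toy decision rule and feature names, is our hypothetical illustration of the naive test described above, not a legal standard or an actual method used by courts:

```python
# Sketch of an "attribute-flipping" counterfactual test: change only the
# protected attribute and check whether the decision flips. The decision
# rule and feature names below are hypothetical, for illustration only.

def decide(applicant):
    # A toy, deliberately biased decision rule that (improperly) uses 'sex'.
    score = applicant["years_experience"] * 2
    if applicant["sex"] == "male":  # discriminatory term, for illustration
        score += 5
    return score >= 10

def flip_test(applicant, attribute, alternative):
    """Return True if flipping `attribute`, holding every other feature
    fixed, changes the decision. This is the naive test in the text."""
    flipped = dict(applicant, **{attribute: alternative})
    return decide(applicant) != decide(flipped)

applicant = {"sex": "female", "years_experience": 4}
print(flip_test(applicant, "sex", "male"))  # True: the decision depends on sex

# Caveat (as argued in the text): this test assumes the attribute can be
# changed "with all other facts unchanged", ignoring that in reality the
# attribute may causally influence the other features.
```

The brittleness the text describes is visible here: the test only detects explicit use of the attribute in the rule, and it pretends the other features would be unchanged in the counterfactual world.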

In contrast to disparate treatment, disparate impact is about practices that have a disproportionate effect on a protected class, even if unintentional. At a high level, disparate impact must be both unjustified and avoidable to be unlawful. This is again operationalized through a burden-shifting framework. First, the plaintiff must establish that there is a disproportionate difference in selection rates between different groups. If that can be shown, the employer has the opportunity to show that the practice producing the different selection rates has a business justification. The burden then reverts to the plaintiff to show that there is an “alternative employment practice” that would have achieved the employer’s aims while being less discriminatory.
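The first step of this framework, showing a disproportionate difference in selection rates, is inherently quantitative. One widely cited benchmark in the employment context is the EEOC’s “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is generally treated as evidence of adverse impact (evidence that invites scrutiny, not a conclusive finding). A minimal sketch with hypothetical numbers:

```python
# Sketch: comparing selection rates across groups, using the EEOC
# "four-fifths rule" as a rough benchmark. The applicant counts are
# hypothetical; real analyses also consider statistical significance.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (num_selected, num_applicants)."""
    return {g: selected / applicants
            for g, (selected, applicants) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

outcomes = {"group_a": (48, 80), "group_b": (24, 60)}  # hypothetical counts
print(selection_rates(outcomes))        # {'group_a': 0.6, 'group_b': 0.4}
ratio = adverse_impact_ratio(outcomes)  # 0.4 / 0.6, roughly 0.67
print(ratio < 0.8)                      # True: below the 4/5 benchmark
```

Crossing the four-fifths threshold only establishes the prima facie showing; the burden-shifting steps about business justification and alternative practices remain.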

One way to think about disparate impact is as a way to “sniff out” well-concealed intentional discrimination by putting the focus on its impacts, which are more readily observable. Indeed, the case that led to the doctrine involved an employer that introduced aptitude tests for promotion on the very day that the Civil Rights Act of 1964, which prohibited employment discrimination based on race, took effect.Robert N. Price, “Griggs v. Duke Power Co.: The first landmark under Title VII of the Civil Rights Act of 1964,” Southwestern Law Journal 25, no. 3 (2016): 484–93.

But disparate impact is also thought to be motivated by a consideration of distributive justice, that is, minimizing unjustified inequalities in outcomes. In this sense, disparate impact roughly corresponds to the middle view of equality of opportunity that we discussed in Chapter 4. Disparate impact tries to force decision-makers to treat seemingly dissimilar people similarly on the belief that their current dissimilarity is a result of past injustice. It aims to compensate for at least some of the disadvantage suffered for unjust reasons. Indeed, in the case mentioned above, the Supreme Court pointed out that the racial performance disparity on aptitude tests could be explained by inequalities in the educational system. But disparate impact doctrine has evolved over the years, and the extent to which it reflects distributive justice, as opposed to a device for illuminating well-concealed discrimination, is thought to have waned over time.

While we have discussed these two doctrines in the context of employment law, they are found in each of the six domains we discussed in the first section. Disparate impact has been so central to the legal understanding of discrimination that it was later incorporated into statutes, notably the ADA (disability law), but also into Title VII (equal employment law) itself through a 1991 amendment. The Supreme Court, however, has declined to extend the doctrine to the constitutional context: laws or procedures of the state (rather than private actors) violate the Equal Protection Clause only if they have a discriminatory purpose. In other words, there is no equivalent of disparate impact doctrine for state actors, only disparate treatment.

An important general observation about antidiscrimination law—especially for readers who may be accustomed to thinking about fairness in terms of the statistical properties of the outputs of decision making processes—is that the law is primarily concerned with the processes themselves. Relatedly, the way that courts go about weighing evidence is also highly procedural, to the point where it may seem tangential to the substantive question of whether discrimination took place.Katie Eyer, “The but-for Theory of Anti-Discrimination Law,” Va. L. Rev. 107 (2021): 1621. Even disparate impact, despite being motivated in part by distributive notions of justice, is treated in a formal and procedural manner. As an illustration of the centrality of the procedural element, the DOJ’s legal manual for proving Title VI disparate impact claims is over 20,000 words long.Title VI Legal Manual (Washington, DC: Civil Rights Division U.S. Department of Justice, April 2021), https://www.justice.gov/crt/book/file/1364106/download.

There are many possible reasons for the law’s focus on process. One is historical: the statutes were primarily responding to blunt discrimination and formal denials of opportunity as opposed to more subtle statistical phenomena. It is also a better match for how the law works: the definition of discrimination cannot be divorced from the procedure for proving it in court. A third reason is political: it is easier to achieve consensus about fair processes than what the right distributive outcome is. Finally, at a pragmatic level, it reflects the law’s attention to the nuances of workplaces and other institutions, compared to which statistical fairness criteria seem crude and oversimplified.

Avoiding excessive burdens on decision makers

A recurring theme is how much burden on decision makers is justified in pursuit of fairness goals. For example, making accommodations for disabled employees results in some cost to a firm.

In general, the law gives substantial deference to the interests of the decision maker. This has been repeatedly made clear by lawmakers and the courts at various points in time. For example, the House Judiciary Committee said on the role of the EEOC at the moment of the agency’s inception: “management prerogatives, and union freedoms are to be left undisturbed to the greatest extent possible.”“House of Representatives Report No. 914 Pt 2, 88th Congress, 1st Session,” 1963. The Supreme Court clarified in 2015 that “The FHA (Fair Housing Act) is not an instrument to force housing authorities to reorder their priorities.”“Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc.” (Supreme Court, 2015).

One exception is the Americans with Disabilities Act which imposes substantial compliance requirements on a large set of firms and governments. This shouldn’t be a surprise since the law sought to create structural changes in society, especially to the built environment. The ADA does have an “undue hardship” defense to the requirement that employers provide “reasonable accommodations” to qualified employees with disabilities, but courts appear to tolerate a higher burden than under, say, Title VII.The ADA originally called for a higher bar: an accommodation would have to threaten the continued existence of the employer’s business. (Nicole Buonocore Porter, “A New Look at the ADA’s Undue Hardship Defense,” Mo. L. Rev. 84 (2019): 121) In fact, the part of the law that applies to public services does have a higher bar for the defendant: the accommodation must fundamentally alter the nature of the service or program. As an illustration, blind employees sued their employers for failing to provide paid readers for four hours of the workday; the court sided with the plaintiffs, implying that a roughly 50% increase in the cost of the employees to the employer did not constitute an undue hardship.Porter. We hasten to add that undue hardship is a multi-factor test and there is no clear or uniform cost threshold; cost is rarely the determinative factor.

There is a range of potential justifications for the burdens on decision makers in the academic scholarship and legislative history. Often the responsibilities of decision makers are justified by appeal to the human rights of those being harmed, rather than by economic analysis. For example, the ADA sought to intervene in discrimination against disabled people that often affected their livelihoods and sometimes cost lives.Porter. Alternatively, burdens are sometimes justified because they pose only a “de minimis” cost. For example, Title VII requires employers to make accommodations for employees’ religious beliefs, but not if doing so would pose more than a minimal cost.

In between these two types of cases lie a variety of others where a more careful balance between benefits and costs is necessary.These types of questions are studied in the field of law and economics, which applies microeconomic theory to explain the effects of laws. We give a few brief examples here.

  • Positive externality. A hope behind the Americans with Disabilities Act was that it would make it easier for disabled people to enter the workforce and contribute to the overall economy.In fact, the fraction of disabled individuals employed has decreased since the passage of the ADA, although the causal effect is far from clear.
  • Regulation as collective action. Title II of the Civil Rights Act of 1964 prohibits discrimination in places of public accommodation (e.g., restaurants). A major reason why such establishments discriminated against minorities was the prejudice of their White customers. Title II enabled them to stop discriminating, gaining business from minority customers without losing business from their White customers; thus, the law did not impose a burden on them but rather created an opportunity.Louis Menand, “The Changing Meaning of Affirmative Action,” The New Yorker, January 2020, https://www.newyorker.com/magazine/2020/01/20/have-we-outgrown-the-need-for-affirmative-action. Similarly, consider insurance. In the absence of regulation, if an insurer avoids calibrating premiums to risk in the interest of fairness, it might go out of business. But if all firms in the market have their behavior constrained by antidiscrimination law in the same way, they can no longer claim to be at a competitive disadvantage.
  • Cheapest cost avoider. The cheapest cost avoider or least cost avoider principle assigns liability from a harm to the party that can avoid the harm at the lowest cost. It is the reason why firms, to some extent, bear liability for discrimination or harassment committed by their employees. If an employer is forced to internalize the costs of discriminatory harassment committed by its employees, it will, on standard economic theory, invest in precautions up to the point where they are no longer cost justified.Samuel R. Bagenstos, “Formalism and Employer Liability Under Title VII,” University of Chicago Legal Forum 2014, no. 1 (2014): 145–76.
  • Correcting irrationality. Some commentators suggest that the rampant discrimination against women before the passage of the ECOA was irrational behavior by creditors, and women were in fact good credit risks.John W. Cairns, “Credit Equality Comes to Women: An Analysis of the Equal Credit Opportunity Act,” San Diego Law Review 13, no. 4 (1976): 960–77. In this view, ECOA can be seen as correcting this irrationality rather than imposing a burden on creditors.

Limits of the law in curbing discrimination

How effective has United States antidiscrimination law been? The best-case scenario is that the possibility of penalties has sufficiently deterred would-be discriminators that rates of discrimination have plummeted, and, in the few remaining cases of discrimination, victims manage to obtain redress through the courts. The worst-case scenario is that the laws have had virtually no effect, and any reductions in disparities since their passage can be attributed to other factors such as discriminators being less successful in the market.

The reality is somewhere in between. Rigorously evaluating the effect of laws is a tricky counterfactual problem and is subject to much uncertainty and debate. However, there is much evidence suggestive of a positive effect. For example, one study used a natural experiment to evaluate the impact of Title VII on job opportunities for African Americans relative to White Americans. It showed that the relative employment of African Americans increased more in industries and regions with a greater proportion of firms that were newly covered under Title VII by the Equal Employment Opportunity Act of 1972.Kenneth Y. Chay, “The Impact of Federal Civil Rights Policy on Black Economic Progress: Evidence from the Equal Employment Opportunity Act of 1972,” Industrial & Labor Relations Review 51, no. 4 (1998): 608–32, https://doi.org/10.1177/001979399805100404; Dau-Schmidt and Sherman, “The Employment and Economic Advancement of African-Americans in the Twentieth Century.”

While the gains have been non-negligible, the effectiveness of anti-discrimination law is blunted for many reasons which we now discuss. This motivates our view that work on algorithmic fairness should not treat the approach adopted in antidiscrimination law as a given, but should instead reconnect with the moral foundations of fairness.

Burdens on victims of discrimination

The law places an array of burdens on victims of discrimination if they wish to seek legal recourse. We will use the labor market as our running example, but our observations apply to other contexts as well.

To begin with, legal intervention is initiated by the victim, not the government,There are limited exceptions to this general principle, such as the ability of enforcement agencies to bring “pattern-or-practice” cases against repeat discriminators. A prominent example is the Department of Justice settlement against the Pennsylvania State Police. U.S. v. Pennsylvania & Pennsylvania State Police, No. 1:14-cv-01474-SHR (M.D. Pa. July 29, 2014), https://www.justice.gov/sites/default/files/crt/legacy/2014/07/31/pennsylvaniapdcomp.pdf. and cannot begin until after victims have already suffered discrimination. Regulators do not prospectively review employment practices, in contrast to other areas of law such as pharmaceutical regulation where drugs must be thoroughly tested before being allowed on the market. Further, there is a fundamental information asymmetry between firms and employees (or job candidates). Victims may not even be aware that they have faced discrimination. After all, job candidates and employees have no direct visibility into employers’ decision-making processes and firms need not provide a justification for an adverse hiring or promotion decision. Only in the domain of credit does some form of transparency requirement exist.However, the turn to algorithmic tools in hiring has opened the possibility of transparency requirements and ex-ante review. A coalition of civil rights organizations has advocated for such practices in a document laying out a set of civil rights principles for hiring assessment technologies. (“Civil Rights Principles for Hiring Assessment Technologies” (The Leadership Conference Education Fund, 2020), https://civilrightsdocs.info/pdf/policy/letters/2020/Hiring_Principles_FINAL_7.29.20.pdf) We discuss emerging ideas such as Algorithmic Impact Assessments in the final section of this chapter.

Even if a victim becomes aware of discrimination, they face barriers that may deter them from suing. Litigation involves additional mental anguish. Victims may also be deterred by the high financial costs of litigation.This can be mitigated in some cases if law firms are willing to be paid on a contingency fee basis—wherein they are paid only in the event of a favorable result as a fixed percentage of the damages recovered—or if the statute provides for collective or class actions. Lawsuits typically take several years to reach a conclusion, by which time the victim’s career may suffer a significant and irreparable setback. If the victim remains at the firm after filing suit, they face an uncomfortable situation in the best case, and potentially retaliation from the employer (even though laws specifically prohibit retaliation, it remains a common result of discrimination lawsuits). And if the victim seeks employment elsewhere, future employers may weigh negatively the fact that the candidate sued a previous employer for discrimination.

Victims who decide to sue face a battery of procedural hurdles. If the employer has internal grievance procedures, the victim may be required to try those before suing (or risk losing their claims). Another prerequisite to filing suit is to file an administrative complaint with the Equal Employment Opportunity Commission promptly after the discrimination starts. The timeliness requirement often puts victims in a double bind because of the need to exhaust internal channels. It also makes it difficult to collect the evidence necessary to prevail in court.Scott A. Moss, “Fighting Discrimination While Fighting Litigation: A Tale of Two Supreme Courts,” Fordham Law Review 76, no. 2 (2007): 981–1013; Sandra F. Sperino and Suja A. Thomas, Unequal: How America’s Courts Undermine Discrimination Law (New York, NY: Oxford University Press, 2017).

That brings us to the final and most serious difficulty that plaintiffs face, which is the burden of proof. To be sure, the standard of proof that the plaintiff must meet in discrimination cases is “preponderance of the evidence” (more likely than not), which is lower than the standard in criminal cases. But even this standard has proved daunting. According to Katie Eyer, “anti-discrimination law is a highly rigid technical area of the law, in which any of a myriad of technical doctrines can lead to dismissal. Courts approach the question of discrimination as if it were a complex legal puzzle, in which any piece out of place must result in the dismissal of the plaintiffs’ claims.”Eyer, “The but-for Theory of Anti-Discrimination Law.”

Specifically, in disparate treatment cases, courts have created numerous defendant-friendly doctrines. Under the “stray remarks” doctrine, discriminatory comments made by the employer about the plaintiff do not constitute evidence of discriminatory intent unless there is a sufficiently clear causal nexus to the decision itself. Under the “same actor” defense, if the employer was willing to hire the plaintiff at a previous time, it is taken as evidence that the employer bears no discriminatory intent against the plaintiff. Under the “honest belief” rule, a case can be summarily dismissed if the employer “honestly believed” in the reasons for the decision, even if they can later be shown to be “mistaken, foolish, trivial, or baseless.”

In disparate impact cases, an overlapping set of factors is arrayed against the plaintiff. While there is no need to establish intent, there is a new set of requirements: identifying a specific policy or practice that caused the adverse employment decision; compiling the requisite statistics to show that the policy has a disparate impact; and rebutting the employer’s defense that the policy is justified by job-relatedness. The third prong is a particularly severe hurdle for plaintiffs, as they are structurally poorly positioned to identify an alternative employment practice, lacking the employer’s knowledge of the internals of the business.Ifeoma Ajunwa, “The Paradox of Automation as Anti-Bias Intervention,” Cardozo Law Review 41, no. 5 (2020): 1671–1742; Porter, “A New Look at the ADA’s Undue Hardship Defense.”

The net result of these barriers to plaintiffs is that their odds of success at trial are exceedingly low. Katie Eyer summarizes data from the Uncertain Justice project: “of every 100 discrimination plaintiffs who litigate their claims to conclusion (i.e., do not settle or voluntarily dismiss their claims), only 4 achieve any form (de minimis or not) of relief. … These odds can properly be characterized as shockingly bad, and extend (with minor differences) to every category of discrimination plaintiff, including race, sex, age, and disability.”Eyer, “The but-for Theory of Anti-Discrimination Law.”

We should note that there is a widespread view that employment discrimination lawsuits are too easy to file and too favorable to plaintiffs, a position we reject. Selmi critically examines this perception and notes that it is prevalent among judges; correcting this perceived imbalance may in fact be one reason for the creation of numerous hurdles for plaintiffs.Michael Selmi, “Why Are Employment Discrimination Cases So Hard to Win?” Louisiana Law Review 61, no. 3 (2001): 555–75. Whether or not one subscribes to the view that many “nuisance lawsuits” are filed by plaintiffs alleging discrimination, it is true that courts are highly strained and judges are wary of decisions that might open the “floodgates” to lawsuits. This suggests that the burdens we have discussed above are unlikely to go away.Katie R. Eyer, “That’s Not Discrimination: American Beliefs and the Limits of Anti-Discrimination Law,” Minnesota Law Review 96 (2012): 1275–1362, https://doi.org/10.1093/acprof:oso/9780199698462.003.0001; Sperino and Thomas, Unequal: How America’s Courts Undermine Discrimination Law.

The difficulty of substantive and structural reform through procedural intervention

Even if compliance with antidiscrimination law is high, and legal remedies are readily attained, there may be even more fundamental limits to the effectiveness of the law. To what extent do the formal limits that the law imposes on individuals and organizations lead to a just society? How big is the gap between legal and moral notions of unfairness?

Stephen Halpern frames the issue thus:Stephen C. Halpern, On the Limits of the Law: The Ironic Legacy of Title VI of the 1964 Civil Rights Act (Baltimore, MD: Johns Hopkins University Press, 1995).

In translating a social problem into the “language” of the law, lawyers must frame their analysis in terms of contrived concepts, issues, questions, and remedies that the legal system recognizes and deems legitimate. In that translation, as in any translation, there are constrictions and distortions. Framing a social problem as a legal issue produces a transformation of the issue itself—a reconceptualization of the problem, yielding unique questions and concerns that first become the focus of the legal debate and subsequently tend to dominate public discussion. When racial problems are reformulated as questions of legal rights, the resulting dialogue does not capture the complexity and subtlety of those problems or permit consideration of the fullest range of remedies for them. Inevitably, the demands and limits of the legal process alter the public discourse about and understanding of vital racial issues.

Halpern’s book is about racial inequality in education; the main example of his thesis is the effort that was put into ending segregation of public schools without much attention paid to the quality of education received by Black students in integrated schools. Similarly, Title VI litigation focused on procedures for processing complaints of discrimination filed with the federal government, rather than mechanisms to vindicate substantive rights to an education of comparable quality. He argues that “[f]ew, if any, of the factors that have an impact on educational achievement are governed by ‘legal rights’ or are readily translatable into an issue of ‘racial discrimination’.” He gives two reasons why inequalities persist despite the law’s formal remedies: the de-facto segregation of American cities and the fact that academic differences often arise from instability in the home and other social, economic, and health disparities.

While the effect of school desegregation in the United States is a vast topic, the broader point is that limitations of the legal process restrict what is achievable—and even shape our understanding of the issues themselves. Another example of this comes from Richard Rothstein’s book Color of Law:Rothstein, The Color of Law.

Although most African Americans have suffered under [historically racist government housing policies], they cannot identify, with the specificity a court case requires, the particular point at which they were victimized. For example, many African American World War II veterans did not apply for government guaranteed mortgages for suburban purchases because they knew that the Veterans Administration would reject them on account of their race, so applications were pointless. Those veterans then did not gain wealth from home equity appreciation as did white veterans, and their descendants could not then inherit that wealth as did white veterans’ descendants. With less inherited wealth, African Americans today are generally less able than their white peers to afford to attend good colleges. If one of those African American descendants now learned that the reason his or her grandparents were forced to rent apartments in overcrowded urban areas was that the federal government unconstitutionally and unlawfully prohibited banks from lending to African Americans, the grandchild would not have the standing to file a lawsuit; nor would he or she be able to name a particular party from whom damages could be recovered.

Another impetus toward proceduralism comes from the interaction of the court system with the internal procedures of organizations. Under the theory of legal endogeneity, organizations enact procedural protections, such as diversity training programs, with the putative aim of curbing discrimination; over time, courts gradually come to mistake these procedural and symbolic compliance-oriented activities for substantive measures; but once these symbolic measures themselves attain legal significance, substantive concerns have been pushed outside the scope of legitimate debate.Lauren B. Edelman et al., “When Organizations Rule: Judicial Deference to Institutionalized Employment Structures,” American Journal of Sociology 117, no. 3 (2011): 888–954, https://doi.org/10.1086/661984.

Further, even substantive change at individual organizations may not imply structural change—that is, a change to the underlying factors in society that produce disparities in the first place. Even if an employer achieved statistical parity in hiring and promotion rates, the application rates might themselves reflect unequal opportunity in society and/or discrimination at previous levels or stages of the system, and there is little the law can do to compel individual decision makers to remedy these inequalities.

Legal interventions whose effects are both substantive and structural are rare. One notable example is the impact of Title IX on women’s athletics. The law has been interpreted to not only prohibit discrimination in a narrow sense but also require equity in a number of areas such as scholarships, coaching, and facilities. Arguably as a result of these interventions, women’s athletics in the United States has gradually risen in prestige, weakening the gender hierarchy in athletics and leading to greater parity even outside the collegiate context.This is not to say that fairness has been achieved in women’s athletics or athletic programs in general: horrific sexual abuse scandals remind us that there is a long way to go. In general, however, these types of substantive interventions have so far proved less feasible than formal ones in part because of the funding they require.

Although we have contrasted procedural interventions with substantive and structural interventions above, the line between them can be murky, and the former can at least function as a toehold to the latter. To the extent that inequality persists because of entrenched policies that maintain an unequal distribution of resources, procedural interventions that allow members of historically oppressed groups to rise to positions of authority might allow them to more effectively alter these policies. Procedural interventions can help prevent already advantaged groups from usurping full control over the policy-making process. Still, this is far from an ideal route to change, as it places the burden of advancing the interests of specific groups on individuals who belong to those groups.

Another seeming contrast is between discrimination law and redistributive policies, i.e., the government directly taxing certain actors and reallocating those funds to the disadvantaged group. But discrimination law can be understood to be a mechanism that places the economic burden of rectifying past injustice to some extent on employers, lenders, etc. In some ways, this might be similar to a policy of taxing employers and using those funds to support groups that have been subject to discrimination in the past.

Affirmative action policies, in particular, occupy a space that is squarely in between formal nondiscrimination and redistributive policies. An example of such a policy would be a job training program offered by an employer that favored groups with lower access to opportunities.“Steelworkers v. Weber” (Supreme Court, 1979). However, except in rare cases, the law does not compel affirmative action by private entities but merely allows it. More commonly seen are affirmative requirements for governments. The Fair Housing Act, in addition to nondiscrimination mandates, requires HUD and recipients of federal funds from HUD to “affirmatively further” the policies and purposes of the act. This might enable, for instance, subsidized housing in high-income communities that opens up access to higher-quality schools and amenities. However, this part of the FHA has largely lain dormant. Thus, at least some of the limitation of the law in creating meaningful change can be attributed to the lack of political will to fully act upon existing laws, rather than an inherent limitation of the legal system.

Regulating machine learning

Although U.S. antidiscrimination law predates the widespread use of machine learning, it is just as applicable if a decision maker uses machine learning or other statistical techniques. That said, machine learning introduces many complications to the application of these laws. These complications are being vigorously debated in the legal scholarship, and many scholars are concerned that existing law may be inadequate to address the types of discrimination that arise when machine learning is involved. At the same time, there is also an opportunity to exercise new regulatory tools to rein in algorithmic discrimination. There is little case law on this topic, so our discussion of these issues will be based on legal scholarship. As before, our discussion is U.S.-centric but we touch upon the EU’s General Data Protection Regulation (GDPR) in a few places.

Disparate treatment

Recall that the two main anti-discrimination doctrines are disparate treatment and disparate impact. Disparate treatment is principally concerned with the explicit intent to discriminate on the basis of legally protected characteristics; in contrast, disparate impact focuses on decision-making where there is no explicit intent to discriminate, but where even decisions made on the basis of seemingly benign characteristics nevertheless result in unjustified disparities along characteristics that are legally protected.

Most reports of discrimination in machine learning have been cases of unintentional rather than intentional discrimination. Moreover, developers of machine learning systems who intend to discriminate are unlikely to rely explicitly on protected attributes due to the easy availability of proxies. In such cases, it can be hard to prove that the proxies were intended to mask discrimination. For these reasons, disparate treatment is rarely invoked and disparate impact is seen as much more relevant. We will return to disparate impact shortly. But one important question involving disparate treatment relates to systems that explicitly rely on the protected attribute to correct data biases or mitigate the effects of past discrimination. Does this constitute disparate treatment? In other words, does the law impose limits on algorithmic fairness interventions?

The answer is nuanced. One relatively bright line in the law is that selection quotas are unconstitutional. In machine learning terms, this roughly maps to the difference between techniques that aim to enforce parity and those that merely penalize disparity during the optimization step. The latter type of technique is analogous to a process that is race conscious and values diversity but still allows the final distribution to vary depending on the set of candidates. It is helpful, as always, to remember that technical distinctions rarely map cleanly to legal determinations.
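To make this technical distinction concrete, here is a small, purely illustrative Python sketch. The candidate scores, group labels, and penalty weight `lam` are all hypothetical; this is a toy model of selection, not a description of any real system or legal test. It contrasts a quota-like procedure that enforces exact parity in selection rates with a greedy procedure that merely penalizes disparity in its objective:

```python
def selection_rates(selected, groups):
    """Per-group selection rate for a chosen set of candidate indices."""
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(i in selected for i in members) / len(members)
    return rates

def select_with_quota(scores, groups, frac):
    """Hard constraint: select the same fraction from each group (quota-like)."""
    selected = set()
    for g in set(groups):
        members = sorted((i for i, grp in enumerate(groups) if grp == g),
                         key=lambda i: scores[i], reverse=True)
        selected.update(members[: round(frac * len(members))])
    return selected

def select_with_penalty(scores, groups, k, lam):
    """Soft constraint: greedily pick k candidates maximizing
    total score minus lam times the gap in selection rates."""
    selected = set()
    for _ in range(k):
        best, best_obj = None, None
        for i in range(len(scores)):
            if i in selected:
                continue
            trial = selected | {i}
            rates = selection_rates(trial, groups)
            obj = (sum(scores[j] for j in trial)
                   - lam * (max(rates.values()) - min(rates.values())))
            if best_obj is None or obj > best_obj:
                best, best_obj = i, obj
        selected.add(best)
    return selected
```

With a quota, selection rates are equal by construction regardless of the applicant pool; with a penalty, the final rates depend on both `lam` and the candidates at hand, which is the sense in which the latter "allows the final distribution to vary depending on the set of candidates."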

There is also a major difference between an individual decision made on the basis of a protected attribute and an overall policy that takes the interests of protected groups into account. Disparate treatment primarily applies to the former type of decision. This is similar to the distinction between the use of a protected attribute at training versus test time, although, again, this distinction by itself is far from legally determinative.
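As a hypothetical illustration of the training-versus-test distinction (the data and the reweighting scheme below are invented for this sketch, not drawn from any case or system), the group attribute can inform how a simple classifier is fit, here by weighting each group's examples so that groups contribute equally to the error, while the learned decision rule consults only the non-protected feature:

```python
def fit_threshold(xs, ys, groups):
    """Training time: pick the score threshold minimizing group-reweighted
    error. Each group's examples are weighted inversely to group size, so
    the protected attribute shapes the fit."""
    weights = {g: 1.0 / sum(1 for gg in groups if gg == g) for g in set(groups)}
    best_t, best_err = None, None
    for t in sorted(set(xs)):
        err = sum(weights[g] for x, y, g in zip(xs, ys, groups)
                  if (x >= t) != y)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(x, threshold):
    """Test time: the decision depends only on the feature, not the group."""
    return x >= threshold
```

Individual decisions made by `predict` never consult the group, even though the group influenced the threshold; whether such a pipeline counts as disparate treatment is exactly the kind of question on which the technical distinction alone is not legally determinative.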

A Supreme Court case that is often cited as an example of the tension between disparate treatment and disparate impact (and the disparate-treatment pitfalls of race-conscious decision making) is Ricci v. DeStefano. The case arose because the New Haven fire department scrapped a promotional exam after finding that Black firefighters had a lower passage rate than White firefighters. The department worried that it would open itself to disparate impact liability. But it was then sued by the White and Hispanic firefighters who would have qualified for promotion based on the exam. The court agreed with the plaintiffs that the department had engaged in disparate treatment against them.

Pauline Kim notes a crucial distinguishing feature of the Ricci case: the plaintiffs had already invested considerable time and expense in studying for the exam, and thus the department’s actions resulted in concrete harm to specific individuals. In Kim’s view, the court’s logic wouldn’t apply when an employer prospectively makes a change to its hiring practices in order to avoid the potential for disparate impact.Pauline Kim, “Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action,” California Law Review, 2022.

Finally, even if a practice constitutes prima-facie disparate treatment, it may be legal if it is part of a valid affirmative action program, i.e., one that aims to remedy past discrimination. In employment, the Supreme Court has ruled that race- or gender-based affirmative action programs are valid if they seek to eliminate “manifest imbalances” in “traditionally segregated job categories” and do not “unnecessarily trammel” the interests of other candidates. Some scholars have argued that this should hold for voluntary algorithmic affirmative action as well.Jason R. Bent, “Is Algorithmic Affirmative Action Legal?” The Georgetown Law Journal 108, no. 4 (2020): 803–53.

Disparate impact

To understand how disparate impact applies to statistical decision making, we must unpack the legal doctrine. The burden-shifting framework established by the Supreme Court for Title VII employment discrimination claims works as follows.“Griggs v. Duke Power Co.” (Supreme Court, 1971). First, the plaintiff must establish a prima-facie case by showing a sufficient difference in selection rates between different groups. What constitutes a sufficient difference is unclear. The EEOC has established a four-fifths guideline—adverse impact is indicated if one group’s selection rate is less than 80% of the highest group’s rate—but this is not a strict rule. In a big-data world, some commentators have argued that the criterion should be based on statistical significance of the difference rather than its magnitude.In fact, statistical significance has always been a part of the EEOC criteria alongside substantive significance.
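As an arithmetic illustration of the four-fifths guideline and the magnitude-versus-significance debate, here is a small Python sketch; the applicant counts are made up, and this is not how a court or the EEOC would actually adjudicate a claim:

```python
import math

def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower selection rate to the higher one. Under the EEOC
    four-fifths guideline, a ratio below 0.8 is a rule-of-thumb indicator
    of adverse impact, not a strict legal test."""
    lo, hi = sorted([selected_a / total_a, selected_b / total_b])
    return lo / hi

def two_proportion_p_value(selected_a, total_a, selected_b, total_b):
    """Two-sided p-value for the difference in selection rates, using the
    pooled two-proportion z-test (normal approximation)."""
    p1 = selected_a / total_a
    p2 = selected_b / total_b
    pooled = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = abs(p1 - p2) / se
    # 2 * (1 - standard normal CDF at |z|)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
```

For instance, 30 selections out of 100 applicants versus 48 out of 100 gives a ratio of 0.625, failing the four-fifths guideline. Conversely, with very large samples even a small gap that easily passes the guideline (say, selection rates of 49.5% versus 50% with 100,000 applicants per group) is statistically significant, which is why the two criteria can point in different directions in a big-data setting.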

If the plaintiff is successful at showing a sufficient difference, the burden shifts to the defendant, who must then establish that the challenged practice is “job related” and consistent with “business necessity”. If the defendant can show this, then the plaintiff can still win by showing that there is an “alternative employment practice” that would have achieved the employer’s aims while being less discriminatory.

The critical step from the perspective of statistical decision making is the question of business necessity. One way the employer can show this is through “empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance.” Since machine learning is a technique for establishing predictive validity, commentators such as Barocas and Selbst suggest that this represents an exceedingly low bar for employers.Solon Barocas and Andrew D Selbst, “Big Data’s Disparate Impact,” UCLA Law Review, 2016. As long as the target variable used in a predictive model is putatively job-related, the requirement is satisfied.

On the other hand, Pauline Kim argues that Title VII can in fact effectively address discriminatory effects of machine learning, based on a close reading of the statute.Pauline T. Kim, “Data-Driven Discrimination at Work,” William & Mary Law Review 58, no. 3 (2017): 857–936. However, the doctrine that has developed since its passage is a poor fit for addressing discriminatory machine learning. For example, the requirement for the plaintiff to identify a specific employment practice that caused the disparity developed in an era when written tests were the primary vehicle for disparate impact. But when a statistical model is at play, especially an uninterpretable one that uses a large number of features, it is not clear what the plaintiff is supposed to identify. Thus, the doctrine will have to evolve if Title VII is to address discriminatory machine learning.

Another issue that’s specific to automated decision making arises from the fact that the software is usually not developed in-house by the decision maker but rather by specialized external firms. For example, companies such as Hirevue and Pymetrics offer tools to automate part of the hiring process and Upstart provides a predictive model for loan underwriting. In such cases, who should bear liability? In employment law, employers, not vendors, bear legal liability.Manish Raghavan et al., “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices,” in Conference on Fairness, Accountability, and Transparency, 2020, 469–81. But employers (and other clients of these tools) resist this since they usually lack the expertise to conduct statistical validation. Shifting some or all of the liability from clients to vendors would have pros and cons from an anti-discrimination perspective. It might mean that vendors become much more careful at testing their offerings. On the other hand, even if a tool has been broadly tested for disparate impact, it may perform differently in the context of a particular employer’s applicant pool. Further, plaintiffs may face even more difficulty in showing an alternative employment practice.

While disparate treatment and disparate impact are the two main prongs of anti-discrimination law, when it comes to data-driven decisions the anti-discrimination toolbox is wider and includes privacy law, explanation, and potentially consumer protection law. We discuss these in turn.

Privacy

When we worry about privacy, the underlying concern is often that data about us could be used to discriminate or might result in adverse treatment. For example, if a job interviewer inquires about religion, it may be considered a privacy violation. The harm that animates this worry is the denial of a job. As another example, reports that the retailer Target uses shopping records to identify pregnant customers sparked outrage.Charles Duhigg, The Power of Habit: Why We Do What We Do in Life and Business (New York, NY: Random House, 2014). The potential for harm arises because pregnancy is a time when individuals are particularly susceptible to manipulation through marketing (which is the reason that marketers are interested in pregnancy in the first place).

Yet, data privacy law and anti-discrimination law have largely been separate in the United States. Returning to the above example, it is not privacy law that forbids interviewers from asking about religion. Rather, since employment law forbids discriminating on the basis of religion, interpretive guidance from the EEOC and, often, from institutions themselves discourages such questions during interviews.An early attempt to mingle privacy and accountability (but not antidiscrimination) goals is seen in the Fair Information Practice Principles. The FIPPs contain the seeds of comprehensive data protection laws enacted around the world. In the U.S., they do not have the force of law, except in some sectoral laws such as the Fair Credit Reporting Act. A watered-down version of FIPPs focusing on “notice and choice” governs U.S. online commerce. (Martha K. Landesberg et al., “Privacy Online: A Report to Congress” (Washington, DC: Federal Trade Commission, 1998), https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf)

Still, given the normative alignment, it is natural to wonder whether privacy law can be adapted to serve antidiscrimination ends. There is a lot of intuitive appeal to this idea, especially when it comes to machine learning. If a decision making system relies on data, why not put restrictions on the flow of data to prevent unjustified discrimination?

But when we examine this argument in more detail, difficulties emerge. The most obvious is the issue of proxies. As we discussed in Chapter 3, prohibiting access to sensitive attributes such as race or gender typically has a negligible impact on a classifier when rich datasets are available. It isn’t just that the decision maker may train a model to predict the sensitive attribute from innocuous attributes, as in the Target example above. It may instead directly use the innocuous attributes to predict the outcome of interest, such as the susceptibility of a particular person to a particular marketing message. This is in fact exactly what has been shown to happen on Facebook-scale advertising platforms.Muhammad Ali et al., “Discrimination Through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes,” Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (2019): 199.
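
The ubiquity of proxies can be made concrete with a small simulation. The following sketch uses entirely synthetic data and invented variable names: a simple least-squares model that never sees the sensitive attribute nonetheless produces scores that differ systematically by group, because an innocuous-looking feature is correlated with group membership.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # sensitive attribute, withheld from the model
proxy = group + rng.normal(0, 0.5, n)         # innocuous-looking feature correlated with group
other = rng.normal(0, 1, n)                   # feature genuinely independent of group
y = 1.0 * group + 0.5 * other + rng.normal(0, 0.5, n)  # outcome partly driven by group

# Fit ordinary least squares WITHOUT the sensitive attribute.
X = np.column_stack([np.ones(n), proxy, other])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
scores = X @ beta

# Despite never observing `group`, the model's scores differ by group.
gap = scores[group == 1].mean() - scores[group == 0].mean()
print(f"mean score gap between groups (group never observed): {gap:.2f}")
```

The size of the gap depends on how strongly the proxy tracks the sensitive attribute; with richer data and more proxies, withholding the attribute does even less to close it.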

If proxies are the problem, another approach is to prohibit the collection of proxies. This is the idea behind “ban the box” laws in U.S. states that prohibit employers from inquiring about criminal history. Ban-the-box has two motives. One is to make it easier for formerly incarcerated people to be rehabilitated into society through employment. In this view, criminal history itself can be seen as the sensitive attribute. The other motive is to combat the racially disparate impact of discrimination against formerly incarcerated people. Here, criminal history can be seen as a proxy for race. It is this view that is of interest to us.

An influential study by Agan and Starr found that employers increased racial discrimination when they were subject to ban-the-box laws.Amanda Agan and Sonja Starr, “Ban the Box, Criminal Records, and Racial Discrimination: A Field Experiment,” The Quarterly Journal of Economics 133, no. 1 (2017): 191–235, https://doi.org/10.1093/qje/qjx028. What does this mean for the prospect of preventing discrimination by prohibiting information flows? One view is that in light of this finding, ban-the-box laws clearly harm more than they help. But another perspective is that racial discrimination is already unlawful, so what the study really reveals is the need to step up auditing and enforcement. If that were to happen, ban-the-box laws might be able to achieve their intended effects.

Going beyond protected attributes and their proxies, privacy law may make it harder to amass dossiers on individuals (for example, containing shopping or browsing records), and we might hope that this would make discrimination harder. While a full discussion of this is beyond our scope, U.S. privacy law is often criticized for failing to accomplish this effectively, for several reasons. There is no general federal privacy law analogous to the GDPR in the E.U. Only a few sectoral privacy laws exist, such as the Health Insurance Portability and Accountability Act (HIPAA). Privacy in most commercial transactions or interactions comes down to “notice and choice”, which is ineffective for many reasons including the power asymmetry and information asymmetry between firms and individuals. In the machine learning context, the notice and choice approach to privacy is particularly ineffective as a barrier to firms building models that may infer sensitive attributes or make adverse decisions based on innocuous attributes. That’s because of the “tyranny of the minority”: it takes only a small number of individuals consenting to data collection for a firm to uncover the statistical patterns that make such inferences possible.Solon Barocas and Helen Nissenbaum, “Big Data’s End Run Around Anonymity and Consent,” in Privacy, Big Data, and the Public Good: Frameworks for Engagement, ed. Julia Lane et al. (New York, NY: Cambridge University Press, 2014), 44–75, https://doi.org/10.1017/cbo9781107590205.004.
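
The tyranny of the minority can be illustrated with a back-of-the-envelope simulation, in which all quantities are made up for illustration. A firm that obtains data from just 1% of individuals can estimate a population-level statistical pattern, such as the probability of a sensitive trait given a purchase signal, and then apply that inference to everyone, consenting or not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
trait = rng.random(n) < 0.05                      # hidden sensitive trait (e.g., pregnancy)
signal = np.where(trait, rng.random(n) < 0.6,     # purchase pattern correlated with the trait
                         rng.random(n) < 0.1)
consent = rng.random(n) < 0.01                    # only 1% of people share their data

def trait_rate_given_signal(mask):
    # P(trait | signal), estimated within a subpopulation
    return trait[mask & signal].mean()

# The estimate from the consenting 1% closely matches the population value,
# so the pattern can be applied to the non-consenting 99% as well.
print(f"inferred from the consenting 1%: {trait_rate_given_signal(consent):.3f}")
print(f"true population value:           {trait_rate_given_signal(np.ones(n, bool)):.3f}")
```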

While privacy laws have not so far helped to address discrimination, discrimination law has sometimes helped to preserve privacy. The Genetic Information Nondiscrimination Act is an anti-discrimination law that mutated into a privacy law through expansive court decisions and EEOC interpretation.Bradley A. Areheart and Jessica L. Roberts, “GINA, Big Data, and the Future of Employee Privacy,” Yale Law Journal 128, no. 3 (2019): 710–90, https://doi.org/10.14293/s2199-1006.1.sor-compsci.clulswf.v1. Genetic information is an exception to the ubiquity of proxies, as it cannot readily be inferred with any degree of completeness or accuracy from observable characteristics.

Broader senses of the word privacy go beyond information flow and encompass transparency, explanation, and redress. We turn to those next.

Explanation

In the context of automated decision making, explanation could have one of two goals. The first is an explanation of the overall system. In a rule-based system this might be the set of decision rules. In a machine learning system it’s less obvious what form this explanation should take, and it is a subject of active research in the field of interpretable machine learning.

An explanation of the overall system promotes fairness objectives because it allows regulators, users, and developers to check whether the system adheres to normative requirements. In many cases, explanation allows us to immediately spot potential unfairness. For example, if we know that a system used for detecting fake accounts on social media relies on an uncommon name as a signal of inauthenticity, it is easy to see why it may be more likely to incorrectly flag users who are from minority cultures, as we discussed in Chapter 1.

The second goal is an explanation of how a particular decision was made given the characteristics of the decision subject. This goal can also promote fairness objectives. It satisfies a powerful innate need to understand how consequential decisions about us are made. The dread that arises when a decision system denies us such explanation is visceral enough that it has a name: Kafkaesque. Explanation of individual decisions also serves more instrumental purposes. It allows us to contest decisions that may have been made on the basis of erroneous information. Even if the decision was accurate, explanation allows recourse, that is, actions that decision subjects might take to alter the decision in the future. For example, a loan applicant who was denied because of a low credit score may make attempts to improve that score.
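
To make recourse-oriented explanation concrete, here is a minimal sketch of one common way adverse action “reason codes” are generated for linear scoring models: rank features by how much each one pulled the applicant’s score below a reference applicant. All feature names, weights, and baseline values here are invented for illustration; real credit models are more complex.

```python
# Invented weights and baseline applicant; purely illustrative.
weights  = {"income": 0.004, "credit_history_years": 2.0, "recent_inquiries": -8.0}
baseline = {"income": 55_000, "credit_history_years": 10, "recent_inquiries": 1}
applicant = {"income": 38_000, "credit_history_years": 3, "recent_inquiries": 4}

# How much each feature moved this applicant's score relative to the baseline.
contrib = {f: w * (applicant[f] - baseline[f]) for f, w in weights.items()}

# The most negative contributions become the stated reasons for the denial.
reasons = sorted(contrib, key=contrib.get)[:2]
print(reasons)  # → ['income', 'recent_inquiries']
```

Note that the stated reasons depend on the choice of baseline, which is one source of the faithfulness problems with explanations discussed later in this section.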

Taking a step back, decision systems can be analyzed at three levels. The highest level is that of values, goals, and normative constraints (for example, maximizing predictive accuracy while ensuring fairness). The second is the design of the system and its rules. The third is the level of individual decisions. Justification is needed at all three levels. In traditional decision making systems, values and goals derive legitimacy through stakeholder participation, deliberation, and democratic debate. This first step is often merged with the next, rulemaking or policymaking, which is the process of going from the first level to the second: designing a decision system based on values and goals. If the first step was skipped, tensions between different values or between different stakeholders’ objectives become apparent in this process. In administrative bureaucracies, they are resolved through a process of public participation.Katherine J. Strandburg, “Rulemaking and Inscrutable Automated Decision Tools,” Columbia Law Review 119, no. 7 (2019): 1851–86. In contrast, the process of adjudication bridges the second and the third levels.Katherine J. Strandburg, “Adjudicating with Inscrutable Decision Tools,” in Machines We Trust: Perspectives on Dependable AI, ed. Marcello Pelillo and Teresa Scantamburlo (Cambridge, MA: MIT Press, 2021). For example, in the United States, bureaucrats periodically assess the value of homes and other real estate based on an elaborate policy in order to determine how much property tax should be levied. If the owner disagrees with the assessment, they can appeal, and have a right to a hearing.

Automated systems erode the procedural protections involved in rulemaking and adjudication: public participation and appeals respectively.Danielle Keats Citron, “Technological Due Process,” Wash. UL Rev. 85 (2007): 1249. These issues are exacerbated when machine learning is involved, due to its inscrutability and non-intuitiveness.Andrew D Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines,” Fordham Law Review 87, no. 3 (2018): 1085. The two goals of explanation might help mitigate these concerns: by allowing us to understand how the overall system and policy conform to normative constraints and how individual decisions conform to the policy. These roughly correspond to the distinction between “global” and “local” interpretability in the technical literature.

Requirements for both flavors of explanation can be seen in existing laws. The FCRA and ECOA contain an “adverse action notice” requirement. This is an example of the second goal, as it pertains only to the individual decision and does not require transparency about the overall model. In contrast, the GDPR requires “meaningful information about the logic involved” if an individual is subject to a consequential decision by an automated system. This is generally understood as requiring some degree of explanation of both the overall model and the specific decision.

Comparison of the two flavors of model explanation

                            Explanation of overall system              Explanation of specific decisions
Goal                        Justify policy based on goals and values   Justify decision based on policy
Bureaucratic analog         Rulemaking or policymaking                 Adjudication
Technical tool              Global interpretability                    Local explanation
Example legal requirement   GDPR: “meaningful information about the    FCRA and ECOA: adverse action notice
                            logic involved”

Selbst and Barocas describe several limitations to the usefulness of explanations. We highlight two main ones. The first is the difficulty of producing explanations that are simultaneously faithful to the model and understandable to a nonexpert. If a credit model combines dozens of variables in nonlinear ways, a reason such as “length of employment” or “insufficient income” might fall far short of fully explaining a decision; yet this is all that is required of adverse action notices. Conversely, an explanation of a decision that is fully faithful to a statistical model may be incomprehensible to most decision subjects.

There is an important distinction between explanations given willingly and those demanded by law of a decision maker who has no other incentive to provide them. So far, it has proven challenging for regulators to set legal requirements for what constitutes a good explanation and to assess whether those requirements are working as intended. Empirical evidence supports the difficulty of compelling unwilling decision makers to provide meaningful explanations. For example, a 2018 study found that Facebook’s “Why am I seeing this?” ad explanations are vague, incomplete, misleading, and generally useless.Athanasios Andreou et al., “Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebook’s Explanations,” in NDSS 2018 - Network and Distributed System Security Symposium, 2018. The research literature shows that far better explanations are possible if a platform chooses to provide them.

A more fundamental limitation described by Selbst and Barocas is that even explanations that are faithful and understandable may not enable normative assessment. If an employer uses a screening model that computes a score based on some keywords (a faithful and understandable explanation), it is normatively important to know whether those keywords represent job-related skills, or act as proxies (for example, hobbies) that signal social class, or something else. We may be able to make such an assessment given the keywords, but it is not straightforward. Modern methods that provide explanations based on high-level concepts rather than low-level features hold promise in this regard, but the gap between explanations and a full normative justification is likely to remain.

Because of these limitations, there has been a gradual turn from explanations to algorithmic impact assessments (AIAs). A full discussion of AIAs is beyond our scope, but we point out how AIAs, at least in an idealized version, differ from explanations. First, AIAs go beyond explaining the model itself and focus on how it was created, how it will be used, and what impacts it is likely to have. Second, the primary consumers of AIAs are not decision subjects but rather regulators and other experts, which alleviates the faithfulness-comprehensibility tradeoff to some degree. Third, AIAs must be performed before the model is deployed and must be updated periodically. Some visions of AIAs call for the involvement of impartial external parties in producing them.

The GDPR incorporates one version of AIAs, namely Data Protection Impact Assessments (DPIAs). DPIAs must include a description of the algorithm and the purpose of the processing, an assessment of the necessity of processing in relation to the purpose, an assessment of the risks to individual rights and freedoms, and the measures a company will use to address these risks. The GDPR requires consultation “where appropriate” with impacted individuals. But DPIAs are not required to be released to the public. It is too early to tell how effective they will be in practice; much will rely on the behavior of regulators.Margot E Kaminski and Gianclaudio Malgieri, “Algorithmic Impact Assessments Under the GDPR: Producing Multi-Layered Explanations,” International Data Privacy Law 11, no. 2 (2020): 125–44, https://doi.org/10.1093/idpl/ipaa020; Andrew D. Selbst, “An Institutional View of Algorithmic Impact Assessments,” Harvard Journal of Law & Technology 35, no. 1 (2021): 117–91.

Algorithmic Impact Assessments are closely related to audits and the terms are sometimes used interchangeably. Nonetheless, there are several types that are worth distinguishing. A 2020 report classifies them into four categories:Ada Lovelace Institute and DataKind UK, “Examining the Black Box: Tools for Assessing Algorithmic Systems” (Technical report, Ada Lovelace Institute, 2020).

  • Bias audits conducted by researchers, journalists, or civil society organizations (inspired by social science audits, as we saw in the chapter on testing discrimination in practice).
  • Regulatory audits conducted by regulators with statutory powers to examine internal data and systems, modeled on financial audits.
  • Algorithmic risk assessments conducted by the developer or procurer of a tool, modeled on environmental impact assessments, to assess possible risks and mitigation strategies before deploying a system.
  • Algorithmic impact evaluations, which are retrospective and modeled on policy evaluations, conducted typically by public sector agencies with respect to algorithms that implement a policy.

Algorithmic impact assessments and audits are a burgeoning area, and their potential is still being explored.Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini, “Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem,” in Conference on Fairness, Accountability, and Transparency, 2022, 1571–83. For example, Ifeoma Ajunwa ambitiously argues for reading into existing employment law a duty of care that would obligate employers to conduct audits of automated hiring systems.Ifeoma Ajunwa, “An Auditing Imperative for Automated Hiring,” Harvard Journal of Law & Technology 34, no. 2 (2021): 621–99. Malgieri and Pasquale propose an ex-ante model of regulation in which developers of consequential AI systems must perform a risk assessment before deployment and, in some cases, obtain approval from an authority. These developments illustrate our point that the turn to machine learning, while indeed creating challenges for antidiscrimination law, also creates opportunities. The software tools and data records involved in automated systems provide a leverage point for regulators.

Consumer protection

Consumer protection law has completely different roots from either antidiscrimination law or privacy law. Consumer movements first gained ground in the United States in the early 20th century, initially due to food safety issues.Upton Sinclair, The Jungle (New York, NY: Penguin, 2006). The Federal Trade Commission was established in 1914. Although it initially focused on antitrust, consumer protection gradually became an equally important prong of its activities. It has been the primary agency responsible for consumer protection, and has the statutory authority to challenge “unfair or deceptive” practices in commerce. It is this authority that the agency uses to carry out the activities that it is well known for, such as policing false advertising and fraud, especially identity theft.Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy (New York, NY: Cambridge University Press, 2016). Many states have consumer protection laws with similar import, enforced by attorneys general.

Credit regulation is one area of consumer protection law that also serves fairness purposes, understood in a broad sense. The Fair Credit Reporting Act of 1970 narrows the permissible uses of credit reports so that they are not used for arbitrary purposes. It gives consumers ways to contest inaccuracies in the data, considering that they are used to make consequential decisions. And it requires notifying the consumer when adverse action is taken against them. FCRA does not address discrimination in the sense of disparate treatment or disparate impact; that would come later, in the Equal Credit Opportunity Act of 1974. In other areas such as employment law, consumer protection does not currently play a role, although scholars have speculatively advocated for treating job candidates as consumers.Ajunwa, “The Paradox of Automation as Anti-Bias Intervention.”

Outside the traditional sectors of antidiscrimination law, there is a vast swath of everyday digital products in which machine learning biases manifest, and this is where consumer protection law is potentially highly relevant. For example, if a face unlock feature on a smartphone is substantially less accurate for some groups of users, this is not a violation of any of the sector-specific statutes we’ve discussed so far, but it may fall under FTC authority. Even in domains such as employment discrimination, there are peculiar gaps such as the fact that vendors of algorithmic screening tools are not covered entities, and consumer protection law can potentially help fill this gap.

As of this writing, this is all speculative. So far, the FTC hasn’t gone after discriminatory practices except when the company also violates an anti-discrimination statute such as ECOA, which the FTC has authority to enforce.Statement of Commissioner Rebecca Kelly Slaughter In the Matter of Liberty Chevrolet, Inc. d/b/a Bronx Honda (Office of Commissioner Rebecca Kelly Slaughter, Federal Trade Commission, 2020). The term “unfairness” in the FTC act has traditionally meant something quite different: taking unjustified advantage of consumers in ways they cannot reasonably avoid. The prototypical example of an unfair commercial practice would be selling snake oil, and a more modern example would be lax data security leading to a data breach. But note that unlike the anti-discrimination statutes, the FTC has substantially more power to determine what counts as deceptive and unfair. It is quite possible that the agency will take a broad view of unfairness and that the courts will permit it. The statute allows the FTC to look to “established public policies” in determining what is unfair. It has been suggested that the FTC can thus look to antidiscrimination statutes and rules as scaffolding to build a framework for making determinations about algorithmic discrimination.Andrew D Selbst and Solon Barocas, “Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law,” University of Pennsylvania Law Review 171 (2023).

The FTC’s deception authority offers a clearer but more circumscribed option. Companies often make affirmative claims about their products being unbiased. If those claims turn out to be false, that’s deception. The same goes for false claims about products being effective. This is relevant since many predictive decision making tools on the market lack evidence of predictive validity, which means that they may subject people to arbitrary decisions. Yet, unless those arbitrary decisions are also systematically biased, they are difficult to challenge under antidiscrimination law. Finally, a lack of transparency may also constitute a deceptive practice. Indeed, the FTC took action against a company that trained a face recognition model on its users’ photos while falsely telling them that the feature was opt-in.Decision and Order, In re Everalbum, Inc., File No. 192-3172 (Federal Trade Commission, 2021).

Historically, the FTC has had a roller-coaster ride in terms of how broadly it interprets its authority and how much it flexes its muscles. After a period of ineffectiveness in the ‘60s and reinvigoration in the ‘70s,Robert Pitofsky, “Past, Present, and Future of Antitrust Enforcement at the Federal Trade Commission,” The University of Chicago Law Review 72, no. 1 (2005): 209–27. the agency was rebuked by Congress in the early ‘80s, which limited its authority due to lobbying by powerful business interests.Hoofnagle, Federal Trade Commission Privacy Law and Policy. It has remained cautious since, and was further caught off guard in the technology era due to limitations of in-house technical expertise. This led to withering criticism for failures such as allowing Cambridge Analytica’s exfiltration of Facebook users’ data, despite the FTC long being aware of similar previous events and supposedly closely monitoring Facebook under a “consent decree”. In the 2020s, the agency has shown some signs of being invigorated. Specifically on algorithmic discrimination, it published a blog post containing surprisingly strong language.Elisa Jillson, “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI,” April 2021, https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai. A whitepaper co-authored by a sitting commissioner also lays out an ambitious agenda.Rebecca Kelly Slaughter, Janice Kopec, and Mohamad Batal, “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission,” Yale JL & Tech. 23 (2020): 1. All this is to say: the relevance of consumer protection law to algorithmic discrimination remains a wild card.

Beyond the traditional conception of consumer protection, there are emerging ideas such as a duty of loyalty for companies who are entrusted with customers’ data.Neil Richards and Woodrow Hartzog, “A Duty of Loyalty for Privacy Law,” Washington University Law Review 99, no. 3 (2021): 961–1021. Such companies would be obligated to act in the best interests of people exposing their data and online experiences. The duty of loyalty is a common obligation in fiduciary relationships (for example, a lawyer owes such a duty to her client). But its application to the holders of personal data is a relatively new idea. Although it has been proposed mainly with the aim of improving privacy and minimizing manipulative practices such as “dark patterns”, it would have some implications for nondiscrimination as well.

Concluding thoughts

We’ve covered a lot of ground in this chapter. We reviewed how the various civil rights movements together gave rise to a relatively robust body of anti-discrimination law in the U.S. Generally, this law aims to strike a balance between preventing (and remedying) discrimination on the one hand, and avoiding excessive burdens on decision makers on the other. It has been refined, contested, and implemented over decades by the push and pull of court decisions, regulatory agencies, institutional bureaucrats, continued civil rights activism, and shifts in public opinion. It has important limitations: practically, private plaintiffs find it difficult to obtain legal recourse; more fundamentally, the law itself is far from an ideal route to bring about structural changes.

Turning to the novel challenges raised by automated decision making, there is a risk that discriminatory machine learning might slip through the gaps in how the law conceives of discrimination. In our view, this risk is counterbalanced by the expanded legal toolkit available: privacy law, requirements regarding explanation and impact assessment, and consumer protection law. So far, this potential has lain mostly dormant for various reasons: a narrow conception of privacy, a lack of broad legislation in the U.S. requiring explanation of consequential decisions, and the timidity of consumer protection agencies. This could yet change; it is possible that the law and enforcement agencies could be reformed to effectively address the new problems. At a minimum, even if not enshrined into law, the tools for intervention that we’ve discussed offer a blueprint for public interest advocates seeking to hold companies accountable.

References

Agan, Amanda, and Sonja Starr. “Ban the Box, Criminal Records, and Racial Discrimination: A Field Experiment.” The Quarterly Journal of Economics 133, no. 1 (2017): 191–235. https://doi.org/10.1093/qje/qjx028.
Ajunwa, Ifeoma. “An Auditing Imperative for Automated Hiring.” Harvard Journal of Law & Technology 34, no. 2 (2021): 621–99.
———. “The Paradox of Automation as Anti-Bias Intervention.” Cardozo Law Review 41, no. 5 (2020): 1671–1742.
Ali, Muhammad, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. “Discrimination Through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes.” Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (2019): 199.
Andreou, Athanasios, Giridhari Venkatadri, Oana Goga, Krishna Gummadi, Patrick Loiseau, and Alan Mislove. “Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebook’s Explanations.” In NDSS 2018 - Network and Distributed System Security Symposium, 2018.
Areheart, Bradley A., and Jessica L. Roberts. “GINA, Big Data, and the Future of Employee Privacy.” Yale Law Journal 128, no. 3 (2019): 710–90. https://doi.org/10.14293/s2199-1006.1.sor-compsci.clulswf.v1.
Bagenstos, Samuel R. “Formalism and Employer Liability Under Title VII.” University of Chicago Legal Forum 2014, no. 1 (2014): 145–76.
Barocas, Solon, and Helen Nissenbaum. “Big Data’s End Run Around Anonymity and Consent.” In Privacy, Big Data, and the Public Good: Frameworks for Engagement, edited by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, 44–75. New York, NY: Cambridge University Press, 2014. https://doi.org/10.1017/cbo9781107590205.004.
Barocas, Solon, and Andrew D Selbst. “Big Data’s Disparate Impact.” UCLA Law Review, 2016.
Bent, Jason R. “Is Algorithmic Affirmative Action Legal?” The Georgetown Law Journal 108, no. 4 (2020): 803–53.
“Bostock v. Clayton County, Georgia.” Supreme Court, 2020.
Cairns, John W. “Credit Equality Comes to Women: An Analysis of the Equal Credit Opportunity Act.” San Diego Law Review 13, no. 4 (1976): 960–77.
Chay, Kenneth Y. “The Impact of Federal Civil Rights Policy on Black Economic Progress: Evidence from the Equal Employment Opportunity Act of 1972.” Industrial & Labor Relations Review 51, no. 4 (1998): 608–32. https://doi.org/10.1177/001979399805100404.
Citron, Danielle Keats. “Technological Due Process.” Wash. UL Rev. 85 (2007): 1249.
“Civil Rights Principles for Hiring Assessment Technologies.” The Leadership Conference Education Fund, 2020. https://civilrightsdocs.info/pdf/policy/letters/2020/Hiring_Principles_FINAL_7.29.20.pdf.
Costanza-Chock, Sasha, Inioluwa Deborah Raji, and Joy Buolamwini. “Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem.” In Conference on Fairness, Accountability, and Transparency, 1571–83, 2022.
Dau-Schmidt, Kenneth Glenn, and Ryland Sherman. “The Employment and Economic Advancement of African-Americans in the Twentieth Century.” Jindal Journal of Public Policy 1, no. 2 (2013): 95–116.
“Decision and Order, In re Everalbum, Inc., File No. 192-3172.” Federal Trade Commission, 2021.
Duhigg, Charles. The Power of Habit: Why We Do What We Do in Life and Business. New York, NY: Random House, 2014.
Edelman, Lauren B, Linda H Krieger, Scott R Eliason, Catherine R Albiston, and Virginia Mellema. “When Organizations Rule: Judicial Deference to Institutionalized Employment Structures.” American Journal of Sociology 117, no. 3 (2011): 888–954. https://doi.org/10.1086/661984.
Education, Department of. “Enforcement of Title IX of the Education Amendments of 1972 with Respect to Discrimination Based on Sexual Orientation and Gender Identity in Light of Bostock v. Clayton County.” Federal Register, 2021.
Eskridge, William N. Dishonorable Passions: Sodomy Laws in America, 1861-2003. New York, NY: Viking, 2008.
“Executive Order No. 8802.” Federal Register, 1941.
Eyer, Katie. “The but-for Theory of Anti-Discrimination Law.” Va. L. Rev. 107 (2021): 1621.
Eyer, Katie R. “That’s Not Discrimination: American Beliefs and the Limits of Anti-Discrimination Law.” Minnesota Law Review 96 (2012): 1275–1362. https://doi.org/10.1093/acprof:oso/9780199698462.003.0001.
“Facebook, Inc.” Department of Housing; Urban Development, 2019.
Fleischer, Doris Zames, and Frieda Zames. The Disability Rights Movement: From Charity to Confrontation. JSTOR, 2013.
Friedan, Betty. The Feminine Mystique. New York, NY: W. W. Norton & Company, 2001.
Graham, Hugh Davis. “The Storm over Grove City College: Civil Rights Regulation, Higher Education, and the Reagan Administration.” History of Education Quarterly 38, no. 4 (1998): 407–29. https://doi.org/10.2307/369849.
“Griggs v. Duke Power Co.” Supreme Court, 1971.
Halpern, Stephen C. On the Limits of the Law: The Ironic Legacy of Title VI of the 1964 Civil Rights Act. Baltimore, MD: Johns Hopkins University Press, 1995.
“Heart of Atlanta Motel, Inc. V. United States.” Supreme Court, 1964.
Hoofnagle, Chris Jay. Federal Trade Commission Privacy Law and Policy. New York, NY: Cambridge University Press, 2016.
“House of Representatives Report No. 914 Pt 2, 88th Congress, 1st Session,” 1963.
Jillson, Elisa. “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI,” April 2021. https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
Kaminski, Margot E, and Gianclaudio Malgieri. “Algorithmic Impact Assessments Under the GDPR: Producing Multi-Layered Explanations.” International Data Privacy Law 11, no. 2 (2020): 125–44. https://doi.org/10.1093/idpl/ipaa020.
Katz, Martin J. “Unifying Disparate Treatment (Really).” Hastings Law Journal 59, no. 3 (2008): 643.
Keele, Luke, William Cubbison, and Ismail White. “Suppressing Black Votes: A Historical Case Study of Voting Restrictions in Louisiana.” American Political Science Review 115, no. 2 (2021): 694–700.
Kim, Pauline. “Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action.” California Law Review, 2022.
Kim, Pauline T. “Data-Driven Discrimination at Work.” William & Mary Law Review 58, no. 3 (2017): 857–936.
Landesberg, Martha K., Toby Milgrom Levin, Caroline G. Curtin, and Ori Lev. “Privacy Online: A Report to Congress.” Washington, DC: Federal Trade Commission, 1998. https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf.
“Lawrence v. Texas.” Supreme Court, 2003.
Ada Lovelace Institute, and DataKind UK. “Examining the Black Box: Tools for Assessing Algorithmic Systems.” Technical report, Ada Lovelace Institute, 2020.
Lucas, Lauren Sudeall. “Identity as a Proxy.” Columbia Law Review 115, no. 6 (2015): 1605–74.
Melnick, R. Shep. “Analyzing the Department of Education’s Final Title IX Rules on Sexual Misconduct.” Washington, DC: The Brookings Institution, June 2020. https://www.brookings.edu/research/analyzing-the-department-of-educations-final-title-ix-rules-on-sexual-misconduct/.
Menand, Louis. “The Changing Meaning of Affirmative Action.” The New Yorker, January 2020. https://www.newyorker.com/magazine/2020/01/20/have-we-outgrown-the-need-for-affirmative-action.
Moss, Scott A. “Fighting Discrimination While Fighting Litigation: A Tale of Two Supreme Courts.” Fordham Law Review 76, no. 2 (2007): 981–1013.
“Obergefell v. Hodges.” Supreme Court, 2015.
Okoro, Catherine A., NaTasha D. Hollis, Alissa C. Cyrus, and Shannon Griffin-Blake. “Prevalence of Disabilities and Health Care Access by Disability Status and Type Among Adults — United States, 2016.” Morbidity and Mortality Weekly Report 67, no. 32 (2018): 882–87. https://doi.org/10.15585/mmwr.mm6732a3.
Padden, Carol, and Tom Humphries. Inside Deaf Culture. Cambridge, MA: Harvard University Press, 2006.
Pitofsky, Robert. “Past, Present, and Future of Antitrust Enforcement at the Federal Trade Commission.” The University of Chicago Law Review 72, no. 1 (2005): 209–27.
“Plessy v. Ferguson.” Supreme Court, 1896.
Porter, Nicole Buonocore. “A New Look at the ADA’s Undue Hardship Defense.” Mo. L. Rev. 84 (2019): 121.
Price, Robert N. “Griggs v. Duke Power Co.: The First Landmark Under Title VII of the Civil Rights Act of 1964.” Southwestern Law Journal 25, no. 3 (1971): 484–93.
“Questions and Answers about the EEOC’s Enforcement Guidance on Pregnancy Discrimination and Related Issues,” June 2015. https://www.eeoc.gov/laws/guidance/questions-and-answers-about-eeocs-enforcement-guidance-pregnancy-discrimination-and.
Raghavan, Manish, Solon Barocas, Jon Kleinberg, and Karen Levy. “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices.” In Conference on Fairness, Accountability, and Transparency, 469–81, 2020.
Richards, Neil, and Woodrow Hartzog. “A Duty of Loyalty for Privacy Law.” Washington University Law Review 99, no. 3 (2021): 961–1021.
“Roe v. Wade.” Supreme Court, 1973.
Rosenblat, Alex, Karen EC Levy, Solon Barocas, and Tim Hwang. “Discriminating Tastes: Customer Ratings as Vehicles for Bias.” Data & Society, 2016, 1–21.
Rothstein, Richard. The Color of Law: A Forgotten History of How Our Government Segregated America. New York, NY: W. W. Norton & Company, 2018.
Selbst, Andrew D. “An Institutional View of Algorithmic Impact Assessments.” Harvard Journal of Law & Technology 35, no. 1 (2021): 117–91.
Selbst, Andrew D, and Solon Barocas. “The Intuitive Appeal of Explainable Machines.” Fordham Law Review 87, no. 3 (2018): 1085.
———. “Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law.” University of Pennsylvania Law Review 171 (2023).
Selmi, Michael. “Why Are Employment Discrimination Cases So Hard to Win?” Louisiana Law Review 61, no. 3 (2001): 555–75.
“Senate Report 589 of the 94th Congress.” United States Senate, 1976.
Sinclair, Upton. The Jungle. New York, NY: Penguin, 2006.
Slaughter, Rebecca Kelly, Janice Kopec, and Mohamad Batal. “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission.” Yale JL & Tech. 23 (2020): 1.
Sperino, Sandra F., and Suja A. Thomas. Unequal: How America’s Courts Undermine Discrimination Law. New York, NY: Oxford University Press, 2017.
“Statement of Commissioner Rebecca Kelly Slaughter In the Matter of Liberty Chevrolet, Inc. d/b/a Bronx Honda.” Office of Commissioner Rebecca Kelly Slaughter, Federal Trade Commission, 2020.
“Steelworkers v. Weber.” Supreme Court, 1979.
Strandburg, Katherine J. “Adjudicating with Inscrutable Decision Tools.” In Machines We Trust: Perspectives on Dependable AI, edited by Marcello Pelillo and Teresa Scantamburlo. Cambridge, MA: MiT Press, 2021.
———. “Rulemaking and Inscrutable Automated Decision Tools.” Columbia Law Review 119, no. 7 (2019): 1851–86.
“Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc.” Supreme Court, 2015.
“Title VI Legal Manual.” Washington, DC: Civil Rights Division, U.S. Department of Justice, April 2021. https://www.justice.gov/crt/book/file/1364106/download.
Vaas, Francis J. “Title VII: Legislative History.” BC Indus. & Com. L. Rev. 7 (1965): 431.
Valentin, Iram. “Title IX: A Brief History.” Newton, MA: Equity Resource Center, 1997. http://www2.edc.org/womensequity/pdffiles/t9digest.pdf.
Walker, Christopher J. “Attacking Auer and Chevron Deference: A Literature Review.” The Georgetown Journal of Law & Public Policy 16, no. 1 (2018): 103–22.
Walker, Juliet E. K. History of Black Business in America: Capitalism, Race, Entrepreneurship. Evolution of Modern Business. New York, NY: Twayne Publishers, 1998.
Wilkerson, Isabel. The Warmth of Other Suns: The Epic Story of America’s Great Migration. New York, NY: Penguin Random House, 2011.
Last updated: Wed Dec 13 14:44:39 CET 2023