Litigation Can Help Curb Algorithmic Discrimination in Hiring. Here’s How

Hannah Hilligoss

Hannah Hilligoss is a student at Harvard Law School and a member of the Labor and Employment Lab.

As described in my last post, employers who use algorithmic decision systems (ADSs) in hiring reify past prejudices and systemic discrimination while coating the process in a veneer of objectivity. More troubling, mathematician and social activist Cathy O’Neil predicts these systems will disparately impact already marginalized communities, writing, “the privileged” will be “processed more by people, the masses by machines.” Comprehensive legislation addressing algorithmic discrimination, like the proposed Algorithmic Accountability Act, won’t be enacted quickly enough to stymie the spread of ADSs or to address the very real harms they are already causing. This is where litigation should come in. While not a comprehensive solution, litigation can be used to “paus[e] demonstrably bad ideas,” to raise the profile of algorithmic discrimination by highlighting the voices of those most severely impacted by ADSs, and to provide individual remedy to those harmed.

Federal statutes like Title VII, the ADA, and the ADEA are the most obvious ways to litigate discriminatory ADSs as they are directly aimed at employment discrimination. However, information asymmetries and trade secrecy around ADS technology pose challenges to actions under these statutes. To overcome these hurdles—or at least to slow the spread of ADSs—public enforcement and private actions should also include legal theories under consumer protection, privacy, and constitutional law. This post explores the roles and limits of each of these bodies of law in the effort to eliminate algorithmic discrimination in employment.

Employment Law

Antidiscrimination laws are the obvious choice to address ADS discrimination, so the lack of activity in this arena is surprising. The only prominent ADS litigation under antidiscrimination statutes involves systems that specifically allow users to set discriminatory parameters, such as Facebook allowing employers to target job advertisements by age or to “lookalike” audiences that exclude recipients of one race or gender. In these cases, the levers of discrimination are open for everyone, including the EEOC and civil rights advocates, to see and act on.

As with traditional antidiscrimination litigation, information asymmetries between employers and applicants present significant challenges to even identifying employment discrimination, let alone getting a case into court. This is particularly true of ADSs used in recruiting and resume screening because applicants who don’t receive a job advertisement or who don’t get an interview have no employer interaction on which to base a discrimination claim. ADS vendor claims of trade secrecy add an additional challenge to discovering and litigating instances of algorithmic discrimination.

If, however, an applicant knows their information was analyzed by an ADS, Cornell Professor Ifeoma Ajunwa has offered a theory of discrimination per se that could make pleading a case of discrimination easier. Borrowing from tort law’s negligence per se doctrine, Ajunwa argues that the combination of an employer’s affirmative duty not to discriminate, its “superior knowledge” of disparate impact problems, and its failure to audit an ADS for bias should establish prima facie intent to discriminate. This doctrine would help plaintiffs get their foot in the courtroom door because it reduces reliance on the employer for the evidence required to state a case under disparate treatment and disparate impact theories.

Another solution to the information asymmetry problem is public enforcement by the EEOC. The EEOC has the power to initiate “directed investigations” to address systemic discrimination. The 2006 Systemic Discrimination Task Force Report noted that the EEOC’s unique access to data on employment trends and demographics gives it “particular insight … where victims of discrimination often are not aware that they may have been denied employment based on unlawful criteria.”

While EEOC enforcement is promising, its progress on ADSs has been slow despite recent Congressional calls for it to establish guidance and to clarify its authority to investigate bias in ADSs. Furthermore, the EEOC’s own Uniform Guidelines on Employee Selection Procedures, which govern the validation of hiring selection procedures, may give employers using ADSs an out unless the Guidelines are updated to account for “differential validity” in machine learning systems.

Consumer Protection Law

Consumer protection law won’t help an individual who has suffered from ADS discrimination, but it may be effective in regulating and halting the use of discriminatory ADSs themselves. ADS vendors may be liable under consumer protection law for committing unfair or deceptive practices in violation of the FTC Act. The argument, according to experts, is that ADSs relying on certain machine learning techniques, including facial recognition technology, rest on fundamentally flawed science.

Another approach is demonstrated by EPIC’s 2019 FTC complaint arguing that HireVue commits deceptive practices by denying that it uses facial recognition technology even while admitting that it “collects and analyzes ‘[f]acial expressions’ and ‘facial movements’” to measure traits predictive of job candidate success. Earlier this year, HireVue announced that it would stop relying on “facial analysis” to assess job applicants, citing public concern over bias as its motivation. While this is a limited win for ADS opponents, HireVue will still analyze applicants’ biometric data “including speech, intonation, and behavior—all of which present similar privacy and discrimination risks.”

Privacy Law

Privacy laws like the Artificial Intelligence Video Interview Act have recently been passed to address the proliferation of ADSs in hiring and, specifically, the use of video interview software that relies on facial recognition technology. While these laws provide much needed privacy protections, they won’t advance core antidiscrimination goals. Large settlements under the Biometric Information Privacy Act (BIPA) and privacy requirements of affirmative consent or data deletion won’t halt the use of these systems or spur more accountability for algorithmic discrimination. Scholars and practitioners alike are skeptical of informed consent requirements more generally because consent to an ADS becomes meaningless when the alternative is being unable to apply for the job at all. Finally, at least under BIPA, only the employers that implement a privacy-violating ADS can be held liable, not the ADS vendors.

Constitutional Law

Procedural due process challenges to ADSs in employment decisions are promising and have already been successful. However, their scope is limited in two significant ways: they only apply (1) in the public sector, and (2) where there is “a protected property interest in continued employment.” Practically, this means that an applicant for a public-sector job cannot bring a due process challenge to an ADS used in their hiring decision—they have no property interest in a job they don’t currently have—but a public employee fired due to an ADS could. This is precisely what happened in Houston Federation of Teachers v. Houston Independent School District, where a public-school teacher union challenged the use of a proprietary ADS for employee sanction and promotion decisions. The court ruled that the teachers were deprived of procedural due process because the ADS vendor’s trade secrecy claim prevented the teachers from independently verifying or replicating the ADS outputs. The court concluded that, “[w]hen a public agency adopts a policy of making high stakes employment decisions based on secret algorithms incompatible with minimum due process, the proper remedy is to overturn the policy.”

Ultimately, the challenges to litigating algorithmic discrimination under antidiscrimination statutes, consumer protection, privacy, and constitutional law demonstrate the need for comprehensive regulation. But in the meantime, litigators should creatively use all of these legal theories to remedy algorithmic discrimination and to stymie the spread of ADSs in employment.
