
New York City Takes First Step Towards Regulating Automated Hiring

Nicholas Anway

Nicholas Anway is a student at Harvard Law School.

The next time you apply for a job, you may be scrutinized by an algorithm. Automated hiring software is already common, and demand for it is accelerating as employers compete for workers in increasingly hot labor markets. According to Forbes, “99% of Fortune 500 companies rely on the aid of talent-sifting software, and 55% of human resource leaders in the U.S. use predictive algorithms to support hiring.”

Among other use cases, hiring algorithms quickly sort job applications, screen applicants in interviews and conduct social media background checks, writes Ifeoma Ajunwa, a leading scholar on the subject. Some firms purchase off-the-shelf hiring automation tools from third-party vendors to complete these tasks. HireVue is one example: the company offers software that screens applicants with predictive algorithms that score their “employability” based on their facial expressions and speech in video interviews. Other firms develop automated hiring software in-house.

Many employers view automation as an opportunity to make hiring more efficient by using algorithms to match job openings with qualified candidates more quickly than, and with less input from, human HR teams. But “[f]or antidiscrimination law,” cautions Ajunwa, “the efficacy of any particular hiring system is a secondary concern to ensuring that any such system does not unlawfully discriminate against protected categories.”

As Professor Ajunwa suggests, automated hiring tools and the technologies that drive them have created discriminatory effects throughout the hiring process, including in sourcing, screening and interviewing. In 2018, Amazon infamously scrapped a secret recruiting algorithm that exhibited pervasive bias against women. Facial recognition products from leading tech companies like Microsoft and IBM have been found to perform better when interpreting lighter-skinned subjects than darker-skinned subjects, to consistently interpret Black male faces as angrier than white male faces, and to rest on largely pseudoscientific premises. Speech recognition models have demonstrated further biases against African Americans. And automated hiring has been shown to be particularly biased against workers with disabilities. Still, automated hiring software remains in high demand.

Against this backdrop, New York City took a significant step towards determining whether automated hiring systems unlawfully discriminate when, late last year, it passed one of the country’s most comprehensive laws regulating the use of automated hiring products. The law regulates “automated employment decision tools,” which it defines as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” It does so by introducing two new protections for job applicants and workers: annual bias audits and notification requirements.

A bias audit is defined by the statute as “an impartial evaluation by an independent auditor.” Employers and employment agencies that use “automated employment decision tools” must have an independent bias audit conducted no more than one year prior to a tool’s use and must publish the results. At a minimum, bias audits must include testing of “the tool’s disparate impact” based on gender, race, or national origin.
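The statute does not spell out a testing methodology, but disparate-impact analysis conventionally compares selection rates across demographic groups. As a rough illustration only (the data and function names below are hypothetical, not drawn from the law or from any vendor’s audit), a minimal version of that calculation, using the EEOC’s longstanding “four-fifths” rule of thumb, might look like this:

```python
from collections import Counter

def selection_rates(outcomes):
    """Selection rate (selected / total applicants) per group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True if the tool advanced the candidate.
    """
    applicants = Counter(group for group, _ in outcomes)
    chosen = Counter(group for group, selected in outcomes if selected)
    return {group: chosen[group] / applicants[group] for group in applicants}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group.

    Under the EEOC's "four-fifths" rule of thumb, a ratio below 0.8
    is commonly treated as evidence of disparate impact.
    """
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical audit data: 100 applicants per group; the tool
# advanced 40 applicants from group A and 24 from group B.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 24 + [("B", False)] * 76)

rates = selection_rates(outcomes)
print(rates)                 # {'A': 0.4, 'B': 0.24}
print(impact_ratios(rates))  # {'A': 1.0, 'B': 0.6} -- B falls below 0.8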

Bias audits and other methods of evaluating the impacts of automation have two main goals, writes UCLA law professor Andrew Selbst in a recent paper: “(1) to require firms to consider social impacts early [hiring discrimination, in this case] and work to mitigate them before development, and (2) to create documentation of decisions and testing.” Selbst argues that even if employers’ “institutional logics, such as liability avoidance and the profit motive” frustrate the first goal, “the second goal does not require full compliance to be successful.”

If Professor Selbst is right, New York City’s bias audits may provide fuel for litigation. That’s because litigation under anti-discrimination laws like Title VII turns in large part on an employer’s conduct. Because of the “black box” nature of many algorithms, it is notoriously difficult to move from algorithmic bias to employer liability without documentation from the employer’s teams that developed or purchased, tested, and implemented an automated tool. Documentation from a bias audit could help fill this gap by answering questions like: “Why was the hiring algorithm chosen?” “Were others tested?” “Were its discriminatory effects known to the employer?” “Is its use a business necessity?” And “were there less biased algorithms available?”

The New York City law also requires employers and employment agencies to provide notice to job candidates and employees if they will be screened or evaluated by automated hiring software. Candidates and employees must be notified that “an automated employment decision tool will be used in connection with the[ir] assessment or evaluation”; of “[t]he job qualifications and characteristics” the tool will use in that assessment or evaluation; and that they may “request an alternative selection process or accommodation.” Finally, the law includes a data transparency provision, requiring employers to disclose the “type of data collected for the automated employment decision tool,” the data’s source(s) and the employer or employment agency’s data retention policy.

Several other state and local governments have begun to regulate automated hiring tools with legislation that echoes elements of New York City’s law. In 2019, Illinois became the first state to pass legislation regulating the use of automated evaluation software like HireVue’s in video interviews. The State’s Artificial Intelligence Video Interview Act (HB 2557) requires employers to notify interviewees that an automated Artificial Intelligence (“AI”) product will evaluate their facial and vocal expressions, to explain how the product works and to offer interviewees the opportunity to consent or opt out. In 2020, Maryland passed a similar statute, HB 1202, which requires employers to receive consent from applicants before using facial recognition in job interviews. And in California and Washington, D.C., bills have been introduced that would require automated hiring tools to be audited for discriminatory biases.

These regulations demonstrate progress towards more robust worker protections from automated hiring, but applicants and employees remain vulnerable to employers’ asymmetric power to hire and fire. The Illinois Act, for example, does not require companies to consider applicants who refuse to be analyzed by AI products, nor does it require employers or third-party vendors to audit their products for biases. The notice requirement may prompt some applicants to opt out of automated evaluation, but doing so may simply opt them out of a job. And although New York City’s law was passed in response to advocacy from civil rights groups, many argue that it does not go far enough. The Center for Democracy and Technology, for example, has argued that the bill was significantly diluted prior to its passage, including by weakening its bias audit and notice requirements and by removing the rulemaking authority of the New York City Commission on Human Rights to clarify and expand on the bill’s provisions.

Despite its shortcomings, New York City’s law is the most significant U.S. legislation to date designed to prevent automated hiring discrimination. Its enforcement provisions include fines of up to $500 for a first violation and up to $1,500 for each subsequent violation. The law takes effect on January 1, 2023.
