
Preventing a Dystopian Work Environment: AI Regulation and Transparency in At-Will Employment

Dallas Estes

Dallas Estes is a student at Harvard Law School and a member of the Labor and Employment Lab.

We know that artificial intelligence might take our jobs – but what if it replaces our bosses? AI took up a prominent place in our national consciousness this past year as commentators observed with dismay and wonder that ChatGPT and other machine learning systems were passing the bar exam, writing college essays, and creating award-winning art. And AI has revived longstanding fears that automation will displace workers. But unlike the automation of decades past, AI threatens to replace not just workers, but decisionmakers. Perhaps that is what elevates some stories of AI at work: the sense that it is worse, even intolerable, to have certain managerial tasks performed by something that is not human.

This sense is captured in legislative proposals on AI that have proliferated across the United States in recent years. This post highlights several legislative efforts to regulate AI in the workplace. While many focus on algorithmic bias, some get at something more – a sense of disquiet that AI will watch, use, or control workers without their consent. And solutions to these novel issues may well have an unintended, but welcome, effect: a softening of at-will employment’s presumption against transparency in the workplace.

Algorithmic bias and opacity

In a 2022 survey, 79% of employers indicated that they use AI in the hiring and recruitment process. While some believe that AI could mitigate discrimination in hiring, others are wary that algorithms will replicate human biases under the auspices of machine-like neutrality. For example, in 2016, Amazon discovered that its resume screening tool preferred men to women because the company had received resumes mostly from men in the past. That tool didn’t rely expressly on gender as a data point; instead, it penalized resumes that mentioned women’s colleges and clubs. This episode illustrates two fundamental challenges of automated decision-making systems (ADSs): it’s often not clear what characteristics an algorithm has relied on to reach its results, and even systems designed to avoid reliance on protected characteristics can find patterns and create proxies for those characteristics without our knowledge.
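To make the proxy mechanism concrete, consider a toy sketch – the data and feature names below are invented for illustration and are not drawn from Amazon’s actual system. A model trained on historically biased hiring outcomes, with gender excluded from its inputs, can still learn to penalize a resume detail that correlates with gender:

```python
# A toy illustration of proxy discrimination (all data invented):
# gender is never given to the model, yet biased historical labels
# teach it to penalize a feature that correlates with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

is_woman = rng.random(n) < 0.2                    # protected attribute, held OUT of the features
womens_club = is_woman & (rng.random(n) < 0.8)    # resume mentions a women's college or club
years_exp = rng.normal(5, 2, n)                   # a legitimate qualification signal

# Historical labels reflecting past bias: qualified women were often passed over.
hired = (years_exp + rng.normal(0, 1, n) > 5) & ~(is_woman & (rng.random(n) < 0.5))

X = np.column_stack([years_exp, womens_club])     # note: gender itself is not a feature
model = LogisticRegression().fit(X, hired)

print("weight on years of experience: %+.2f" % model.coef_[0][0])
print("weight on women's-club mention: %+.2f" % model.coef_[0][1])  # negative: a learned proxy
```

The model never sees the protected attribute; the bias enters through the historical labels and surfaces through the correlated feature.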

Given this algorithmic opacity, and the potential for proxies to emerge from any number of mundane details, how can legislatures bar AI’s reliance on protected characteristics? Some enactments and proposals address this transparency gap by requiring visibility into effects. For example, a bill proposed in New York would require annual impact assessments of hiring ADSs for disparate impact on the basis of “sex, race, ethnicity, or other protected class” characteristics. A recent New York City law imposes similar requirements for promotion decisions in addition to hiring. And the Illinois Artificial Intelligence Video Interview Act, which requires employers to obtain consent from job applicants whose videos will be analyzed by AI, also obligates employers to submit demographic data to the state on individuals not afforded an in-person interview following AI review.
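What might the arithmetic of such an impact assessment look like? One familiar benchmark is the EEOC’s four-fifths rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate is treated as evidence of adverse impact. Here is a minimal sketch with invented numbers – none of the statutes above prescribe this exact computation:

```python
# Hedged sketch: selection-rate "impact ratios" of the kind an annual ADS
# audit might report. The 0.8 threshold is the EEOC's four-fifths rule of
# thumb; the applicant outcomes below are invented for illustration.
from collections import defaultdict

# (protected-class category, whether the ADS advanced the applicant)
outcomes = [("men", True), ("men", True), ("men", False),
            ("women", True), ("women", False), ("women", False)]

advanced, total = defaultdict(int), defaultdict(int)
for group, selected in outcomes:
    total[group] += 1
    advanced[group] += selected

rates = {g: advanced[g] / total[g] for g in total}
top_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "  <-- below four-fifths threshold" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

An annual assessment of the kind the New York proposals contemplate would presumably report ratios like these for each protected class.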

Beyond bias: regulating AI in the workplace

But algorithmic bias is not the only focus of recent bills. Several propose restrictions on the surveillance of workers and on how employee data may be used. Illinois’ Artificial Intelligence Video Interview Act did not initially require reporting of demographic data, but focused primarily on disclosure to and consent from applicants whose videos would be analyzed by AI. Such notice and consent requirements feature prominently in proposals for AI regulation both within and outside the workplace. This focus on consent reflects a sense that the very use of AI introduces a privacy harm, or perhaps a dignitary one, particular to the technology: the law is often agnostic to humans performing the same tasks.

Massachusetts’ H.B. 1873 – An Act Preventing a Dystopian Work Environment – is one recent legislative attempt to address the dignitary and other harms AI poses to workers. The bill, which is in committee following its introduction in the House by Representative Dylan A. Fernandes, is not limited to hiring or promotions. It defines “employment-related decision” as “any decision” that “affects wages, benefits, other compensation, hours, work schedule, performance evaluation, hiring, discipline, promotion, termination, job content, assignment of work, access to work opportunities, productivity requirements, workplace health and safety, and other terms or conditions of employment.” In addition to algorithmic impact assessments, the bill proposes restrictions on employers’ use of “worker data”; limitations on electronic monitoring of employees, which must be “the least invasive” means to accomplish an enumerated purpose; and a requirement that AI “productivity systems” not “result in physical or mental harm to workers.”

The bill outlines two procedural requirements worth detailing here. First, the Act would give employees a right to dispute the accuracy of their “worker data,” and employers would be obligated to investigate and to correct inaccurate data. Second, if an employer relied on an ADS to make a “hiring, promotion, termination, or disciplinary decision,” the Act would require the employer to: (1) “corroborate” the ADS by “other means,” such as “managerial documentation, personnel files, or the consultation of coworkers” and (2) provide notice to affected workers of the “specific decision” for which an ADS was used; what “worker data” the ADS relied on; “information or judgments” used to corroborate the ADS; and “[n]otice of the worker’s right to dispute” an employer’s ADS impact assessment.

These provisions would require employers to disclose more information to employees than they must today. In practice, a termination under H.B. 1873 might work like this: an employer who terminates an employee based on an ADS output must tell the employee not only that they are being fired, but also what worker data the ADS used and what additional information the employer considered to corroborate the ADS output. That worker would thus gain visibility into potential bases for their termination, and they would have a right to dispute specific worker data on which the employer or ADS relied. The employer would be required not only to investigate, but to “review and adjust” any employment-related decisions based “solely or partially” on inaccurate data – suggesting that an employee could win back their job.
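Sketched as a data record, the notice in that scenario might carry fields like the following. The field names paraphrase the bill’s quoted notice elements; the structure and values are my own illustration, not anything the bill prescribes:

```python
# Hedged sketch: the disclosure H.B. 1873 would require for an ADS-based
# termination, modeled as a simple record. Field names paraphrase the bill's
# quoted notice elements; the structure and values are invented.
from dataclasses import dataclass

@dataclass
class ADSDecisionNotice:
    specific_decision: str             # the "specific decision" the ADS informed
    worker_data_relied_on: list[str]   # the "worker data" the ADS used
    corroboration: list[str]           # the "other means" used to corroborate the ADS
    dispute_rights: str                # notice of the worker's right to dispute

notice = ADSDecisionNotice(
    specific_decision="termination",
    worker_data_relied_on=["geolocation clock-in records", "productivity scores"],
    corroboration=["managerial documentation", "personnel file review"],
    dispute_rights="You may dispute the accuracy of the worker data listed above.",
)
print(notice)
```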

These effects are significant in an at-will system where employees can be fired for good cause, bad cause, or no cause at all. An at-will employee would gain not just the right to know why they are being terminated, but an opportunity to show that that “why” is incorrect – say, because a geolocator reported that they were late to work when they were not. And the employer would be obligated to yield to that truth.

While Massachusetts’ H.B. 1873 is just one example of what AI regulation could look like in coming years, it may be a harbinger of things to come. Many legislators have responded to the challenge of algorithmic opacity by requiring employers to disclose information about automated employment decisions. As AI infiltrates more workplaces, regulators may rely on affected workers to bring violations of AI regulations to their attention – which would require directing disclosures to workers themselves. Thus, an attempt to address the disquiet of robot bosses could have the unintended effect of increasing transparency in the employer-employee relationship – and mitigate some of the harms of the at-will regime along the way.
