Tech@Work

April 18, 2024

Maddie Chang

Maddie Chang is a student at Harvard Law School.

In today’s Tech@Work, a regulation-of-algorithms-in-hiring blitz: Mass. AG issues advisory clarifying how state laws apply to AI decision-making tools; and British union federation TUC launches campaign for new law to regulate the use of AI at work.

This week, Massachusetts Attorney General Andrea Campbell issued an advisory that outlines how the state’s existing laws and regulations apply to new uses of artificial intelligence (AI), including AI used in hiring. The advisory frames the problem and the stakes as follows: “AI has been found to generate false information or results that are biased or discriminatory. These deficiencies and instances of poor quality are especially concerning when AI is used for processes that impact consumers’ livelihood, reputation, or economic well-being.” It goes on to note that AI decision-making is subject to the state’s consumer protection, anti-discrimination, and data security laws, as well as to the state’s enforcement of the federal Equal Credit Opportunity Act.

On the consumer law side, the guidance provides examples of what counts as an unfair or deceptive practice when it comes to AI. One potentially powerful interpretation is that “offering for sale or use an AI system that is not robust enough to perform appropriately in a real-world environment as compared to a testing environment is unfair and deceptive.” In theory, this type of deception or unfairness could cover, for example, an AI hiring tool that did not exhibit disparate impact when tested on sample data but did when used in real-world hiring contexts.

The British Trades Union Congress (TUC) launched a campaign today for a new bill that would regulate the use of AI at work, as it affects both job seekers and workers. The TUC is an umbrella organization of 48 affiliated unions representing 5.5 million individual members in the UK. The proposal seeks to regulate multiple stages of the AI adoption process in workplaces. Before adopting an AI tool, employers would need to conduct a Workplace AI Risk Assessment (WAIRA) to evaluate the tool’s risks, a process that would involve extensive consultation with workers. Separately, job seekers would be entitled to personalized explanations of AI hiring decisions and other high-stakes decisions, as well as reconsideration of those decisions on a human rights basis.

Additionally, the TUC proposes an outright ban on the use of emotion recognition tools, many of which are considered pseudoscientific. This proposed bill represents a sector-based approach to regulating AI, in contrast to the EU’s cross-sector, technology-centric approach as exhibited in the EU AI Act. Where the US will end up is to be determined. But in the meantime, cities are starting to experiment with use-case-specific regulations, such as New York City’s law requiring audits of AI hiring tools (bonus news item: see a new paper examining its efficacy here!).
