Maddie Chang is a student at Harvard Law School.
In today’s Tech@Work, a regulation-of-algorithms-in-hiring blitz: Mass. AG issues advisory clarifying how state laws apply to AI decisionmaking tools; and British union TUC launches campaign for new law to regulate the use of AI at work.
This week, Massachusetts Attorney General Andrea Campbell issued an advisory outlining how the state’s existing laws and regulations apply to new uses of artificial intelligence (AI), including AI used in hiring. The advisory begins by framing the problem and the stakes as follows: “AI has been found to generate false information or results that are biased or discriminatory. These deficiencies and instances of poor quality are especially concerning when AI is used for processes that impact consumers’ livelihood, reputation, or economic well-being.” It goes on to note that AI decisionmaking is subject to the state’s consumer protection, anti-discrimination, and data security laws, as well as the state’s enforcement of the (federal) Equal Credit Opportunity Act.
On the consumer law side, the guidance provides examples of what counts as an unfair or deceptive practice when it comes to AI. One potentially powerful interpretation is that “offering for sale or use an AI system that is not robust enough to perform appropriately in a real-world environment as compared to a testing environment is unfair and deceptive.” In theory, this type of deception or unfairness could cover, for example, an AI hiring tool that did not exhibit disparate impact when tested on sample data but did when deployed in real-world hiring.
The British Trades Union Congress (TUC) launched a campaign today for a new bill that would regulate the use of AI at work as it affects both job seekers and workers. The TUC is an umbrella organization of 48 affiliated unions representing 5.5 million members in the UK. The proposal seeks to regulate multiple stages of the AI adoption process in workplaces. Before adopting an AI tool, an employer would need to conduct a Workplace AI Risk Assessment (WAIRA) to evaluate the tool’s risks, a process that would involve extensive consultation with workers. Separately, job seekers would be entitled to personalized explanations of AI hiring decisions and other high-stakes decisions, as well as reconsideration on a human rights basis.
Additionally, the TUC proposes an outright ban on the use of emotion recognition tools, many of which are considered pseudoscientific. The proposed bill represents a sector-based approach to regulating AI, which stands in contrast to the EU’s cross-sector, technology-centric approach exhibited in the EU AI Act. Where the US will end up remains to be determined. But in the meantime, cities are starting to experiment with use-case-specific regulations, such as New York City’s law requiring audits for AI hiring tools (bonus news item: see a new paper examining its efficacy here!).