Maddie Chang is a student at Harvard Law School.
In today’s Tech@Work, a regulation-of-algorithms-in-hiring blitz: Mass. AG issues advisory clarifying how state laws apply to AI decisionmaking tools; and British union TUC launches campaign for new law to regulate the use of AI at work.
This week, Massachusetts Attorney General Andrea Campbell issued an advisory that outlines how the state’s existing laws and regulations apply to new uses of artificial intelligence (AI), including AI used in hiring. The advisory begins by framing the problem and the stakes as follows: “AI has been found to generate false information or results that are biased or discriminatory. These deficiencies and instances of poor quality are especially concerning when AI is used for processes that impact consumers’ livelihood, reputation, or economic well-being.” It goes on to note that AI decisionmaking is subject to the state’s consumer protection, anti-discrimination, and data security laws, as well as the state’s enforcement of the federal Equal Credit Opportunity Act.
On the consumer law side, the guidance provides examples of what counts as an unfair or deceptive practice when it comes to AI. One potentially powerful interpretation is that “offering for sale or use an AI system that is not robust enough to perform appropriately in a real-world environment as compared to a testing environment is unfair and deceptive.” In theory, this kind of deception or unfairness could cover, for example, an AI hiring tool that exhibited no disparate impact when tested on sample data but did exhibit one when used in real-life hiring contexts.
British union the Trades Union Congress (TUC) launched a campaign today for a new bill that would regulate the use of AI at work as it affects both job seekers and workers. The TUC is an umbrella organization of 48 affiliated unions representing 5.5 million individual members in the UK. The proposal seeks to regulate multiple stages of the AI adoption process in workplaces. Before adopting an AI tool, an employer would need to conduct a Workplace AI Risk Assessment (WAIRA) to evaluate the tool’s risks, a process that would involve extensive consultation with workers. Separately, job seekers would be entitled to personalized explanations of AI hiring decisions and other high-stakes decisions, as well as reconsideration of those decisions on a human rights basis.
Additionally, the TUC proposes an outright ban on the use of emotion recognition tools, many of which are considered pseudo-scientific. This proposed bill represents a sector-based way to regulate AI, which stands in contrast to the EU’s cross-sector, technology-centric approach as exhibited in the EU AI Act. Where the US will end up is to be determined. But in the meantime, cities are starting to experiment with use-case-specific regulations, such as New York City’s law requiring audits for AI hiring tools (bonus news item: see a new paper examining its efficacy here!).