OnLabor is pleased to introduce Tech@Work, a News & Commentary feature that will cover the latest technological trends impacting workers and their unions. The subject is vast, encompassing developments in artificial intelligence, engineering, and bioinformatics and their impacts on all aspects of work life, including surveillance, monitoring, hiring practices, organizing, and bargaining. We hope you enjoy our coverage.
Senator Bob Casey Urged Biden Administration to Investigate and Regulate Worker-Monitoring Technology
In an August letter to the US Labor Secretary, Senator Bob Casey (D-PA) urged the Biden Administration to investigate corporate use of technology to monitor and control workers. Casey cited the novel technologies that companies now use to “track, monitor, manage and discipline workers” as a threat to employees’ power and autonomy in the workplace, and he called on the Administration to implement legal and regulatory restrictions on employers’ surveillance of their workers.
The letter referenced a 2021 report recounting the experiences of Amazon delivery drivers who alleged that Amazon’s performance-tracking technology inappropriately penalized or even terminated them without accounting for real-world causes of slower deliveries, such as traffic jams and locked apartment buildings. Casey’s letter also cited a New York Times investigation revealing how widely workplace productivity-monitoring technology has proliferated across American companies.
Going forward, Casey wrote, the Labor Department ought to evaluate the risks associated with biodata-capturing systems — including those that perform facial and emotional recognition, biometric monitoring, and employee performance analysis — and to develop a suite of legal protections shielding workers from the harmful effects of such technologies.
Insurer Not Obligated to Defend Senior Living Facility in Former Employee’s Biometrics Suit
In August, the US District Court for the Northern District of Illinois ruled that insurer Church Mutual is not obligated to defend its policyholder, Prairie Village Supportive Living, in a class action suit. The lawsuit, filed against an Illinois-based subsidiary of Prairie Village known as Eagle’s View Supportive Living and Memory Care, was brought in 2021 by former employees of the nursing home. The complaint alleges that the senior living facility unlawfully collected, used, and shared employee biometric data in violation of the Illinois Biometric Information Privacy Act. BIPA — a landmark piece of legislation passed in Illinois in 2008 — regulates the gathering, use, and storage of biometric information such as fingerprints and facial scans.
In the complaint, former Eagle’s View employees asserted that the facility had required them to scan their fingerprints at the beginning and end of each workday without securing a written release authorizing the collection of their biodata. The complaint also alleged that Eagle’s View had failed to disclose its process for purging employee biodata once collected.
Church Mutual, Prairie Village’s insurance provider, argued in July 2021 that its insurance policies do not cover “wrongful employment practices,” such as invasion of privacy. Prairie Village — which carried both employment-practices and general-liability coverage from Church Mutual — countered that antidiscrimination statutes, such as the Pregnancy Discrimination Act, are not excluded from coverage, and that, by extension, BIPA claims should not be excluded either.
In its summary judgment opinion, however, the district court reasoned that BIPA imposes various obligations on employers’ handling of biodata but has nothing to do with discrimination. The court therefore granted summary judgment for Church Mutual, holding that the insurer owes no duty to defend or indemnify Prairie Village under the existing policies.
New York City AI Hiring Law to Take Effect in January 2023
Starting in January 2023, employers in New York City will have to comply with a groundbreaking law regulating the use of artificial intelligence in hiring and recruiting. The first-of-its-kind legislation will require employers to (a) limit their use of artificial-intelligence and machine-learning tools, particularly where such tools replace human decision-making about prospective employees, and (b) conduct and publish an annual “bias audit” to increase transparency into companies’ hiring tools and metrics.
The ordinance, which was passed last year, has come under renewed focus as AI tools — ranging from algorithms designed to parse candidates’ biographical data to software that analyzes interviewees’ body language and voice — have been found to perpetuate bias in hiring. For example, unchecked hiring algorithms that incorporate area code as a parameter have been found to create racial disparities as they filter candidate pools.
Under the new legislation, employers will be required to audit their hiring technology for bias based on race, ethnicity, sex, and other protected characteristics. Employers that fail to comply with the ordinance could face a $500 fine for a first offense and a $1,500 fine for each subsequent violation. Although states like Maryland and Illinois have implemented regulations pertaining to more limited applications of artificial intelligence, the New York City ordinance is the first in the country to impose specific requirements on employers in their use of automated decision-making tools.
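To make the audit concept concrete, here is a minimal, hypothetical sketch of the kind of selection-rate comparison such an audit might involve. The group labels and sample data are invented for illustration, and the 0.8 flagging threshold is borrowed from the EEOC’s longstanding four-fifths guideline for disparate impact; the ordinance itself does not prescribe this methodology or any particular code.

```python
# Hypothetical bias-audit sketch: compare how often an automated hiring
# tool selects candidates from each demographic group. Group names, data,
# and the 0.8 threshold are illustrative assumptions, not requirements
# of the New York City ordinance.

from collections import defaultdict

def selection_rates(candidates):
    """Share of candidates the tool selected, per group.

    `candidates` is a list of (group, selected) pairs, where `selected`
    is True if the automated tool advanced the candidate.
    """
    totals = defaultdict(int)
    picked = defaultdict(int)
    for group, selected in candidates:
        totals[group] += 1
        if selected:
            picked[group] += 1
    return {g: picked[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Toy audit data: (demographic group, whether the tool selected them).
audit_sample = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(audit_sample)
for group, ratio in impact_ratios(rates).items():
    # Ratios below ~0.8 are a common red flag for disparate impact.
    flag = "  <- potential disparate impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}{flag}")
```

In this toy sample, group_b’s impact ratio of 0.50 falls well below the four-fifths benchmark, which is the sort of disparity a published audit would surface for scrutiny.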