News & Commentary

May 4, 2023

Maddie Chang

Maddie Chang is a student at Harvard Law School.

In today’s Tech@Work, more than 150 workers establish a Content Moderators Union in Nairobi, the White House is looking into the use of automated technology to surveil workers, and a WGA proposal on AI highlights writers’ desire to limit how AI is used in the workplace.

On Monday, more than 150 content moderators gathered at a historic meeting in Nairobi to create a new union. Content moderators manually flag illegal or banned content across social media platforms like Facebook and TikTok. Those who gathered this week work for third-party outsourcing firms, where the work of continually viewing harmful content can be traumatizing and can pay as little as $1.50 per hour. As reported in TIME, former Facebook content moderator Daniel Motaung started the effort to unionize back in 2019, and was then fired. He is currently suing Meta and the third-party content moderation firm Sama in a Nairobi court. In the suit, Motaung is part of a group of 43 Sama workers who allege that the firm engaged in “forced labor and human trafficking, unfair labor relations, union busting and failure to provide ‘adequate’ mental health and psychosocial support.” As commentators have previously noted, the organizing highlights that behind the seemingly automatic functioning of social media websites and AI chatbots is a group of unseen workers in precarious conditions.

The White House announced a public request for information on the use of automated technology to surveil, manage, and evaluate workers. The announcement notes the increase in worker surveillance and points to examples such as: warehouse workers who use scanners that clock how fast they work, nurses who wear badges that monitor movements, and office workers who use software that tracks their keystrokes. The announcement explains that this kind of technology may have benefits, but may also push workers to move too fast, deter worker organizing, and lead to differential treatment of workers. The White House Office of Science and Technology Policy and the Domestic Policy Council are looking for “ideas for how the federal government should respond to any relevant risks and opportunities.” As reported in Bloomberg, this call for information comes amid state-level efforts in Minnesota, California, and New York to regulate the use of worker surveillance technology.

Finally, the Writers Guild of America (WGA) strike shines a light on how workers want to have a say in how artificial intelligence (AI) may be used in workplaces going forward. As Iman reported earlier this week, in its proposal to the collective bargaining representative of the major Hollywood studios, the WGA put forward a first-of-its-kind demand related to AI. As listed in the WGA strike announcement, the Guild wanted to ensure that AI will not be used to write or rewrite literary material, that AI will not be used as source material, and that material written by Guild members will not be used to train AI models. The studios rejected the proposal and instead offered annual meetings to discuss advancements in technology. While some claims that AI threatens jobs overstate the current capacity of automated technology, the WGA proposals reflect writers’ interest in having some control over the way AI impacts their work. And as some have observed, the proposal also demonstrates the potential of labor negotiations as a mechanism to govern technology more broadly.