News & Commentary

July 13, 2023

Maddie Chang

Maddie Chang is a student at Harvard Law School.

In today’s Tech@Work, human workers behind Google’s AI chatbot call attention to poor labor conditions; and in a separate but related matter, workers who train OpenAI’s ChatGPT have filed a petition with Kenya’s National Assembly asking the body to investigate OpenAI for labor abuses.

Google’s chatbot “Bard” is a generative artificial intelligence (AI) tool that produces answers to people’s questions in a conversational format – seemingly without human intervention. But behind the scenes, human workers help improve the chatbot’s answers by rating their helpfulness and flagging offensive content. Google contracts with outside companies like Appen and Accenture to provide these services. As reported in Bloomberg this week, subcontracted workers are raising issues with their working conditions and the nature of the tasks they are assigned. Bloomberg spoke with several workers who reported being given unreasonably tight timeframes and inadequate training to rate the coherence of information in chatbot responses.

Workers reported rating chatbot answers that contain high-stakes information, including dosage information for various medicines and information about state laws. “Raters,” who are paid as little as $14 per hour, noted not having background knowledge about the truth of the information presented and not having enough time to check it. The guidelines workers have received state: “You do not need to perform a rigorous fact check” when evaluating the helpfulness of answers, and that ratings should be “based on your current knowledge or quick web search.” As reported in the Bloomberg piece, Google said: “Ratings are deliberately performed on a sliding scale to get more precise feedback to improve these models…such ratings don’t directly impact the output of our models and they are by no means the only way we promote accuracy.” The Alphabet Workers Union, which has organized Google workers and subcontracted workers at Appen and Accenture, condemned the way new AI-related tasks have made conditions for workers more difficult.

In a separate but related matter, the Wall Street Journal podcast this week covered the exploitation of subcontracted workers in Kenya who train OpenAI’s chatbot ChatGPT. Earlier this month, a group of these workers filed a petition with Kenya’s National Assembly, asking the legislative body to investigate OpenAI and its subcontractor Samasource (Sama) for labor abuses. The WSJ reporter spoke with workers who are asked to review and flag disturbing or grotesque content in ChatGPT responses for minimal pay (as low as $2–3 per hour). By flagging harmful content, workers have helped ensure that chatbot responses do not contain pornographic or otherwise offensive material. But doing so involves viewing toxic content for long periods of time, which has traumatized workers and led to post-traumatic stress disorder and other conditions. As reported in Time earlier this year (and covered in News and Commentary), a group of content moderators sued the same subcontractor, Sama, in Nairobi, but for its work with toxic content on Meta’s platforms.
