
Maddie Chang is a student at Harvard Law School.
In today’s Tech@Work, human workers behind Google’s AI chatbot call attention to poor labor conditions; and in a separate but related matter, workers who train OpenAI’s ChatGPT have filed a petition with Kenya’s National Assembly asking it to investigate OpenAI for labor abuses.
Google’s chatbot “Bard” is a generative artificial intelligence (AI) tool that produces answers to people’s questions in a conversational format – seemingly without human intervention. But behind the scenes, human workers help improve the chatbot’s answers by rating their helpfulness and by flagging offensive content. Google contracts with outside companies like Appen and Accenture to provide these services. As reported in Bloomberg this week, subcontracted workers are raising concerns about their working conditions and the nature of the tasks they are assigned. Bloomberg spoke with several workers who reported being given unreasonably tight timeframes and inadequate training to rate the coherence of information in chatbot responses.
Workers reported rating chatbot answers that contain high-stakes information, including dosage information for various medicines and information about state laws. “Raters,” who are paid as little as $14 per hour, noted that they lacked background knowledge about the accuracy of the information presented and did not have enough time to check it. The guidelines workers have received state that raters “do not need to perform a rigorous fact check” when evaluating the helpfulness of answers, and that ratings should be “based on your current knowledge or quick web search.” As reported in the Bloomberg piece, Google said that: “Ratings are deliberately performed on a sliding scale to get more precise feedback to improve these models…such ratings don’t directly impact the output of our models and they are by no means the only way we promote accuracy.” The Alphabet Workers Union, which has organized Google workers and subcontracted workers at Appen and Accenture, condemned the way new AI-related tasks have made conditions for workers more difficult.
In a separate but related matter, the Wall Street Journal podcast this week covered the exploitation of subcontracted workers in Kenya who train OpenAI’s chatbot ChatGPT. Earlier this month, a group of these workers filed a petition with Kenya’s National Assembly, asking the legislative body to investigate OpenAI and its subcontractor Samasource (Sama) for labor abuses. The WSJ reporter spoke with workers who are asked to review and flag disturbing or grotesque content in ChatGPT responses for minimal pay (as low as $2-3 per hour). By flagging harmful content, workers have helped to ensure that chatbot responses do not contain pornographic or otherwise offensive content. But doing so involves viewing toxic content for long periods of time, which has traumatized workers and led to post-traumatic stress disorder and other conditions. As reported in Time earlier this year (and covered in News and Commentary), a group of content moderators sued the same subcontractor, Sama, in Nairobi, but for its work with toxic content on Meta’s platforms.