Maddie Chang is a student at Harvard Law School.
In today’s Tech@Work, human workers behind Google’s AI chatbot call attention to poor labor conditions; and in a separate but related matter, workers who train OpenAI’s ChatGPT have filed a petition with Kenya’s National Assembly asking it to investigate OpenAI for labor abuses.
Google’s chatbot “Bard” is a generative artificial intelligence (AI) tool that produces answers to people’s questions in a conversational format – seemingly without human intervention. But behind the scenes, human workers help improve the chatbot’s answers by rating their helpfulness and by flagging offensive content. Google contracts with outside companies like Appen and Accenture to provide these services. As reported in Bloomberg this week, subcontracted workers are raising concerns about their working conditions and the nature of the tasks they are assigned. Bloomberg spoke with several workers who reported being given unreasonably tight timeframes and inadequate training to rate the coherence of information in chatbot responses.
Workers reported rating chatbot answers that contain high-stakes information, including dosage information for various medicines and information about state laws. “Raters,” who are paid as little as $14 per hour, said they lacked the background knowledge needed to assess the accuracy of the information presented and did not have enough time to verify it. The guidelines workers have received say: “You do not need to perform a rigorous fact check” when evaluating the helpfulness of answers, and that ratings should be “based on your current knowledge or quick web search.” As reported in the Bloomberg piece, Google said that: “Ratings are deliberately performed on a sliding scale to get more precise feedback to improve these models…such ratings don’t directly impact the output of our models and they are by no means the only way we promote accuracy.” The Alphabet Workers Union, which has organized Google workers and subcontracted workers at Appen and Accenture, condemned the way new AI-related tasks have made conditions for workers more difficult.
In a separate but related matter, the Wall Street Journal podcast this week covered the exploitation of subcontracted workers in Kenya who train OpenAI’s chatbot ChatGPT. Earlier this month, a group of these workers filed a petition with Kenya’s National Assembly, asking the legislative body to investigate OpenAI and its subcontractor Samasource (Sama) for labor abuses. The WSJ reporter spoke with workers who were asked to review and flag disturbing or grotesque content in ChatGPT responses for minimal pay (as low as $2–3 per hour). By flagging harmful content, workers have helped to ensure that chatbot responses do not contain pornographic or otherwise offensive content. But doing so involves viewing toxic content for long periods of time, which has traumatized workers and led to post-traumatic stress disorder and other conditions. As reported in Time earlier this year (and covered in News and Commentary), a group of content moderators sued the same subcontractor, Sama, in Nairobi, but over its work moderating toxic content on Meta’s platforms.