Sophia is a student at Harvard Law School and a member of the Labor and Employment Lab.
In today’s news and commentary, Starbucks and the NLRB face off over a dress code dispute, and mental healthcare workers face a reckoning with AI.
Today, the U.S. Court of Appeals for the Second Circuit will hear oral arguments over Starbucks Corporation’s bid to overturn a National Labor Relations Board ruling that the company’s dress code at the New York Roastery violated its workers’ rights under federal labor law. The case gives the Second Circuit an opportunity to consider the Board’s current standard—Tesla, Inc. (2022)—for evaluating challenges to workplace dress codes. In that decision, the Board held that Tesla illegally prohibited workers from wearing pro-union shirts, and ordered the company to modify its dress code to allow for such shirts. Last year, the Board applied Tesla to hold that Starbucks illegally restricted baristas from wearing shirts with union insignia on them and from wearing more than one pin advocating for union organizing or other personal, political, or religious issues. This case marks the eighth time in two and a half years that Starbucks and the Board have faced off in federal appeals court.
Last week, four wrongful death lawsuits were filed against OpenAI, accusing the company’s chatbot of contributing to psychiatric breakdowns. Filed in California state courts, the cases claim that ChatGPT exacerbated users’ isolation and depression, ultimately leading to their suicides. OpenAI announced last month that it was collaborating with over 170 mental health experts to make ChatGPT more attuned to users expressing thoughts of self-harm. If the chatbot detects suicidal ideation, it is supposed to direct users to real-world resources such as crisis hotlines. However, mental health professionals continue to raise concerns over the feasibility of artificial intelligence as a legitimate source of therapy—a recent study conducted by computer science and psychiatry researchers at Brown University found that AI chatbots “routinely violate core mental health ethics standards” established by the American Psychological Association. As people increasingly turn to AI as a source of mental health support, policymakers should ensure that the voices of mental healthcare workers—such as psychiatrists, psychologists, therapists, and social workers—are heard and given sufficient weight in designing regulations to maximize human safety and well-being.