News & Commentary

November 12, 2025

Sophia Leswing

Sophia is a student at Harvard Law School and a member of the Labor and Employment Lab.

In today’s news and commentary, Starbucks and the NLRB face off over a dress code dispute, and mental healthcare workers face a reckoning with AI.

Today, the U.S. Court of Appeals for the Second Circuit will hear oral arguments over Starbucks Corporation’s bid to overturn a National Labor Relations Board ruling that the company’s dress code at the New York Roastery violated its workers’ rights under federal labor law. The case gives the Second Circuit an opportunity to consider the Board’s current standard—Tesla, Inc. (2022)—for evaluating challenges to workplace dress codes. In that decision, the Board held that Tesla illegally prohibited workers from wearing pro-union shirts, and ordered the company to modify its dress code to allow for such shirts. Last year, the Board applied Tesla to hold that Starbucks illegally restricted baristas from wearing shirts with union insignia on them and from wearing more than one pin advocating for union organizing or other personal, political, or religious issues. This case marks the eighth time in two and a half years that Starbucks and the Board have faced off in federal appeals court.

Last week, four wrongful death lawsuits were filed against OpenAI, accusing the company’s chatbot of contributing to psychiatric breakdowns. Filed in California state courts, the cases claim that ChatGPT exacerbated users’ isolation and depression, ultimately leading to their suicides. OpenAI announced last month that it was collaborating with over 170 mental health experts to make ChatGPT more attuned to users expressing thoughts of self-harm. If the chatbot detects suicidal ideation, it is supposed to direct users to real-world resources such as crisis hotlines. However, mental health professionals continue to raise concerns over the feasibility of artificial intelligence as a legitimate source of therapy—a recent study conducted by computer science and psychiatry researchers at Brown University found that AI chatbots “routinely violate core mental health ethics standards” established by the American Psychological Association. As people increasingly turn to AI as a source of mental health support, policymakers should ensure that the voices of mental healthcare workers—such as psychiatrists, psychologists, therapists, and social workers—are heard and given sufficient weight in designing regulations to maximize human safety and well-being.
