Sophia is a student at Harvard Law School and a member of the Labor and Employment Lab.
In today’s news and commentary, Starbucks and the NLRB face off over a dress code dispute, and mental healthcare workers face a reckoning with AI.
Today, the U.S. Court of Appeals for the Second Circuit will hear oral arguments over Starbucks Corporation’s bid to overturn a National Labor Relations Board ruling that the company’s dress code at the New York Roastery violated its workers’ rights under federal labor law. The case gives the Second Circuit an opportunity to consider the Board’s current standard—Tesla, Inc. (2022)—for evaluating challenges to workplace dress codes. In that decision, the Board held that Tesla illegally prohibited workers from wearing pro-union shirts, and ordered the company to modify its dress code to allow for such shirts. Last year, the Board applied Tesla to hold that Starbucks illegally restricted baristas from wearing shirts with union insignia on them and from wearing more than one pin advocating for union organizing or other personal, political, or religious issues. This case marks the eighth time in two and a half years that Starbucks and the Board have faced off in federal appeals court.
Last week, four wrongful death lawsuits were filed against OpenAI, accusing the company’s chatbot of contributing to psychiatric breakdowns. Filed in California state courts, the cases claim that ChatGPT exacerbated users’ isolation and depression, ultimately leading to their suicides. OpenAI announced last month that it was collaborating with over 170 mental health experts to make ChatGPT more attuned to users expressing thoughts of self-harm. If the chatbot detects suicidal ideation, it is supposed to direct users to real-world resources such as crisis hotlines. However, mental health professionals continue to raise concerns about the viability of artificial intelligence as a legitimate source of therapy: a recent study by computer science and psychiatry researchers at Brown University found that AI chatbots “routinely violate core mental health ethics standards” established by the American Psychological Association. As people increasingly turn to AI for mental health support, policymakers should ensure that the voices of mental healthcare workers, including psychiatrists, psychologists, therapists, and social workers, are heard and given sufficient weight in designing regulations that maximize human safety and well-being.