Sophia is a student at Harvard Law School and a member of the Labor and Employment Lab.
In today’s news and commentary, Starbucks and the NLRB face off over a dress code dispute, and mental healthcare workers face a reckoning with AI.
Today, the U.S. Court of Appeals for the Second Circuit will hear oral arguments over Starbucks Corporation’s bid to overturn a National Labor Relations Board ruling that the company’s dress code at the New York Roastery violated its workers’ rights under federal labor law. The case gives the Second Circuit an opportunity to consider the Board’s current standard—Tesla, Inc. (2022)—for evaluating challenges to workplace dress codes. In that decision, the Board held that Tesla illegally prohibited workers from wearing pro-union shirts, and ordered the company to modify its dress code to allow for such shirts. Last year, the Board applied Tesla to hold that Starbucks illegally restricted baristas from wearing shirts with union insignia on them and from wearing more than one pin advocating for union organizing or other personal, political, or religious issues. This case marks the eighth time in two and a half years that Starbucks and the Board have faced off in federal appeals court.
Last week, four wrongful death lawsuits were filed against OpenAI, accusing the company’s chatbot of contributing to psychiatric breakdowns. Filed in California state courts, the cases claim that ChatGPT exacerbated users’ isolation and depression, ultimately leading to their suicides. OpenAI announced last month that it was collaborating with over 170 mental health experts to make ChatGPT more attuned to users expressing thoughts of self-harm. If the chatbot detects suicidal ideation, it is supposed to direct users to real-world resources such as crisis hotlines. However, mental health professionals continue to raise concerns over the viability of artificial intelligence as a legitimate source of therapy—a recent study by computer science and psychiatry researchers at Brown University found that AI chatbots “routinely violate core mental health ethics standards” established by the American Psychological Association. As people increasingly turn to AI for mental health support, policymakers should ensure that the voices of mental healthcare workers—psychiatrists, psychologists, therapists, and social workers—are heard and given sufficient weight in designing regulations that maximize human safety and well-being.
November 12