Sophia is a student at Harvard Law School and a member of the Labor and Employment Lab.
In today’s news and commentary, Starbucks and the NLRB face off over a dress code dispute, and mental healthcare workers face a reckoning with AI.
Today, the U.S. Court of Appeals for the Second Circuit will hear oral arguments over Starbucks Corporation’s bid to overturn a National Labor Relations Board ruling that the company’s dress code at the New York Roastery violated its workers’ rights under federal labor law. The case gives the Second Circuit an opportunity to consider the Board’s current standard—Tesla, Inc. (2022)—for evaluating challenges to workplace dress codes. In that decision, the Board held that Tesla illegally prohibited workers from wearing pro-union shirts, and ordered the company to modify its dress code to allow for such shirts. Last year, the Board applied Tesla to hold that Starbucks illegally restricted baristas from wearing shirts with union insignia on them and from wearing more than one pin advocating for union organizing or other personal, political, or religious issues. This case marks the eighth time in two and a half years that Starbucks and the Board have faced off in federal appeals court.
Last week, four wrongful death lawsuits were filed against OpenAI, accusing the company’s chatbot of contributing to psychiatric breakdowns. Filed in California state courts, the cases claim that ChatGPT exacerbated users’ isolation and depression, ultimately leading to their suicides. OpenAI announced last month that it was collaborating with over 170 mental health experts to make ChatGPT more attuned to users expressing thoughts of self-harm. If the chatbot detects suicidal ideation, it is supposed to direct users to real-world resources such as crisis hotlines. However, mental health professionals continue to raise concerns over the feasibility of artificial intelligence as a legitimate source of therapy—a recent study conducted by computer science and psychiatry researchers at Brown University found that AI chatbots “routinely violate core mental health ethics standards” established by the American Psychological Association. As people increasingly turn to AI as a source of mental health support, policymakers should ensure that the voices of mental healthcare workers—such as psychiatrists, psychologists, therapists, and social workers—are heard and given sufficient weight in designing regulations to maximize human safety and well-being.