Amazon: When Algorithms Fire Humans

Tiran Bajgiran

Tiran Bajgiran is a student at Harvard Law School.

Poor selfies, rainy days, and locked gates – all are reported triggers for the auto-firing of delivery personnel by Amazon’s Flex app, and all follow events beyond the workers’ control. Meanwhile, no appeal to humans is available on the platform, nor any negotiation with the opaque algorithm calculating whether gig drivers have outlived their usefulness to the company.

Over the past decade, bots terminating employment have quietly evolved from science fiction into the everyday reality of America’s platform workforce. Spearheading this algorithmic management crusade is Amazon, whose dedication to taking the “human” out of “human resources” and to promoting high employee turnover grows by the year. Consider the experience of this driver:

The email I woke up to this morning stated that my account had been terminated because “the images you provided did not meet the requirements for the Amazon Flex program.” […] I’m going to appeal it but it seems like the success rate in appeals is very low, and I consistently see people who appeal saying they are often only talking to bots – so how do you reason with a bot? And what images could they be talking about? The selfies we take every 3-4 days?

The selfies in question reflect a fraction of Amazon’s broader surveillance and datafication of gig workers. From the moment they open the app, Flex drivers are algorithmically monitored and evaluated for “efficiency” – with metrics covering seat-belt use, acceleration, braking, reversing, cornering, responsiveness to customer requests, and how much they touch their screen while in motion. Since 2019, the company has also required drivers to routinely upload pictures of themselves in order to “combat fraud,” that is, to deter multiple workers from sharing a single account on the Flex platform. Meanwhile, drivers’ forums are full of posts complaining of arbitrary terminations and facial-recognition glitches – including from workers who lost weight, shaved their beards, got a haircut, or simply took a picture in poor lighting.

Terminated drivers then have ten days to file an appeal – to a bot – during which time they are barred from working. Should the driver lose, as is the norm, they can request arbitration, with a typical filing fee of $200 – an unrealistic prospect for workers barely making minimum wage. Consider the illustrative experience of Stephane Normandin, who contested the opacity of his termination in vain:

THIS HAS TO BE A MISTAKE ! I depend on this to survive. Just what standards have I not met? This email is not specific… I have a consistent rating of always getting everything delivered, I have never missed a block I always show up on time (early) and I’ve never canceled late this just doesn’t make any sense.

And yet, for the e-commerce behemoth, growing revelations of botched automated terminations do not seem to have tempered interest in its Flex app, which registered 300,000 downloads last November alone. That the app is rife with arbitrary firings – punishing workers for events outside their control, such as traffic blocks or incorrect delivery directions – has not convinced Amazon to reconsider its management model.

Robophobia aside, what are the distinctive harms of algorithmic firing, and how do they differ from the harms involved in other forms of at-will discharge? Here are two points to consider.

  1. Mental Health Degradation

Flex drivers see only four rating categories on the app: Fantastic, Great, Fair, or At Risk. The rating is meant to reflect their productivity and is used to threaten workers with termination if they fail to maintain a certain level. A single late delivery can dramatically affect the rating, and some workers report that it can take months to recover from even an unavoidable delay.

We know from media reports that Flex drivers are assessed algorithmically on a range of unclear variables, including on-time performance, whether the delivered package is sufficiently hidden from the street, and the extent to which they accommodate customer requests. But drivers do not know the exact metrics used to evaluate their rating, which changes constantly, nor how it is calculated; in fact, neither does Jeff Bezos – only the algorithm knows, analyzing each worker’s “efficiency” live and autonomously. The opacity of this technology is not conducive to workers’ mental health.

When workers are uncertain about why they might be fired, they are prone to increased anxiety: unable to rationally understand their rating, they search for patterns to explain and recreate the conditions that once got them rewarded. Emerging studies link this type of pervasive target-setting technology with negative impacts on workers’ mental health. Drivers, in particular, contend with the pressure of near-constant micro-management and behavioral analysis by their phone. Such constant electronic surveillance can increase stress levels, and extensive research links job-related stress to ulcers and cardiovascular disorders. Consider the illustrative experience of this delivery driver:

I used to enjoy the job but technology watches over us and management by telematics reports too much; it has affected my mental health… Inward facing cameras are being used to discipline so many drivers from every aspect of what you do. I have found in my case it makes you very nervous and jittery about doing your job.

  2. Increased Risk of Discrimination

Generally, one of the best-documented harms of data-driven technologies for workers is the prospect of discrimination based on race, gender, or disability, especially in hiring and firing software. The classic scenario is an algorithm trained to evaluate staff productivity against the benchmark of the company’s existing workforce in its training data – often white, able-bodied, and male – adversely impacting those outside the usual demographics.

Specifically, the absence of human intermediaries can also hamper reasonable accommodations related to disability, pregnancy, or religious observance. Since the algorithm measures efficiency without accounting for fluid, unavoidable externalities or for the physical condition of the individual, it can have a disparate impact on workers with a legitimate reason to request a different evaluation standard.

Often, these accommodations are granted through an interactive process between employer and employee: two humans. Generally, an employee initiates the interactive process by notifying the employer of the need for a reasonable accommodation – a conversation that can be sensitive and even difficult. If an employee’s primary interface with their employer is a chatbot or an algorithm, initiating that process may be even more difficult.

Human-in-Command

Consider the prospect of “human-in-command” termination procedures, an approach supported by the European Economic and Social Committee’s Opinion on Artificial Intelligence. It requires that “workers must be involved in developing these kinds of complementary AI systems, to ensure that the systems are useable and that the worker still has sufficient autonomy and control (human-in-command), fulfilment and job satisfaction”.

To fulfill this objective, it is crucial that any termination decision suggested or effected by an algorithm be subject to review by human beings who “look behind” the data and the recommendation to terminate: Is the data tainted by bias? Could the cause of the problem have been outside the employee’s control? Data-driven recommendations must be investigated by humans, who remain legally accountable, alongside their corporation, for the termination and its consequences. The fact that a termination was issued by an algorithmic process should never exclude personal liability – and the contours of that liability should one day be a topic of collective bargaining.
