Beyond Misclassification: Gig Economy Discrimination Outside Employment Law

Noah Zatz

Noah Zatz is Professor of Law at the University of California, Los Angeles.

Amidst all the hubbub about whether Uber drivers et al. are to be classified legally as “employees,” some important things are easy to miss.  First, these employee status issues are longstanding and widespread beyond the gig economy.  Misclassification plagues workers who provide essential services but may be easier to ignore when commentators and policymakers do not personally summon them by smartphone.  Consider those online gift orders handled by port truck drivers and warehouse workers.  Second, “employment law” is not the only body of law that regulates work.

Both points are illustrated by the simmering concern about how customer feedback ratings may hard-wire discrimination into the supervisory techniques of gig economy platforms.  The platforms’ use of customer ratings to discipline or terminate workers highlights deep problems in civil rights law regardless of employee status.  As I’ve noted elsewhere, firms’ rote reliance on biased customer feedback reveals the weakness of the law’s fetish for “discriminatory intent.”  But courts have policed the boundaries of intent less rigidly than sometimes realized.  Consequently, a legal attack on feedback-based discrimination may not need to rely on the expansive “disparate impact” standard available in employment discrimination claims.  Instead, legal challenges might proceed under civil rights laws forbidding discrimination in contracting and in public accommodations.

Imagine that customers seeking rides to the airport are, on average, less comfortable or just less pleased being driven around by a dark-skinned immigrant with a Muslim-sounding name than by a cheerfully underemployed, native-born white recent college graduate.  If, knowing this, a taxi company were more liberal in its hiring of white drivers, civil rights laws of all stripes would forbid the conduct as “intentional discrimination.”  That would be true even if the company were motivated purely by a desire to please its customers and even if whitening its driver pool would deliver a competitive advantage.

What if, instead, the firm simply lets customers vote on which workers to hire or retain?  Now the employer just tallies up the votes without ever needing to consider, or even to know, a worker’s race.  Substantively, the outcome is the same:  some workers get a boost because of their race/religion/immigration status while others get the shaft.  Yet formally, outsourcing individualized assessment to customers launders out the firm’s discriminatory intent and allows it to claim that its hands are clean.

This elevation of form over substance is misguided, but conventional accounts of discriminatory intent can’t explain why (and no, it has nothing to do with another hot topic, “implicit bias”).  Sharpening the problem further is the formal irrelevance of the firm’s knowledge that a worker was rated lower because of his race or its capacity to easily prevent or counteract this bias.  Even when failing to act on this knowledge and exercise this capacity, the employer still could trot out the defense that it terminated the worker “in spite of,” not “because of,” the racial character of the customers’ judgment.  Therefore, it acted without discriminatory intent.

Fortunately, the law sometimes fails to follow this formal logic to its foolish conclusion, even if the wise outcome is not accompanied by a coherent rationale.  For instance, in the landmark Manhart case, the Supreme Court forbade employers from charging women higher pension contributions based on the (accurate) assessment that they likely would live longer than men.  Later, in its less-heralded Arizona Governing Committee decision, the Court rejected an employer’s attempt to evade Manhart by outsourcing discrimination.  That employer credited its retirees with a lump sum without regard to sex, but it then directed them to convert this credit into an annuity through an outside vendor that offered smaller payments to women.

More recently, an appellate court found that it could be unlawful for a taxi company to terminate a driver because the firm’s insurance company would not cover the driver due to his age, even though the employer merely was applying its “neutral” policy of requiring drivers to be insurable.  And, as I wrote about at length, courts routinely require employers to take affirmative steps to monitor, prevent, and correct sex- and race-based harassment of workers by customers.

None of these scenarios involve employer conduct that satisfies formal definitions of discriminatory intent.  Despite that, all these claims have succeeded without resort to more controversial, and less widely available, claims of “disparate impact.”  When courts face individual workers who clearly have been injured because of their protected status, they often sidestep the formalism of discriminatory intent and turn instead to questions bearing on the employer’s responsibility to prevent or correct the injury.

For the gig economy, such an inquiry into responsibility would explore the feasibility of detecting biased customer feedback and avoiding reliance upon it for worker discipline.  A crucial point here is that firms creating and running these platforms are the ones that decide how to structure, elicit, and act upon customer feedback.  What questions do they ask, what answer format is available, how is it analyzed and how quickly, and with what other information is the feedback integrated?

Uber, for instance, not only requires customers to provide ratings but also goes beyond the bottom-line number to allow and address accompanying textual complaints.  That same capacity could be used to detect and exclude feedback accompanied by comments indicating bias.
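For illustration only, and not as a description of any platform’s actual system: here is a minimal sketch of how free-text comments could be screened before a low rating counts toward driver discipline.  The keyword list, data shapes, and function names are all assumptions, and real detection would require far more than crude pattern matching.

```python
import re
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical indicators only; a real system would need careful,
# context-aware detection rather than simple keyword matching.
BIAS_INDICATORS = [
    r"\baccent\b",
    r"\bforeign(er)?\b",
    r"\bimmigrant\b",
    r"\bgo back to\b",
]

@dataclass
class Feedback:
    driver_id: str
    rating: int              # e.g., 1-5 stars
    comment: Optional[str] = None

def flag_for_review(fb: Feedback) -> bool:
    """True if the free-text comment suggests the rating may reflect bias
    against the driver rather than the quality of the ride."""
    if not fb.comment:
        return False
    return any(re.search(p, fb.comment.lower()) for p in BIAS_INDICATORS)

def usable_for_discipline(feedback: List[Feedback]) -> List[Feedback]:
    """Exclude flagged feedback from the scores used to discipline drivers."""
    return [fb for fb in feedback if not flag_for_review(fb)]
```

In practice, flagged items might be routed to human review rather than silently discarded; the point is simply that a platform already parsing comments has the raw material to do this.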

Furthermore, as others have suggested, the voracious appetite for data gathering and analysis characteristic of these platforms, and upon which they stake their claims to be “new” technology companies, could be brought to bear here, too.  There are ample opportunities to analyze, and adjust for, various forms of bias in drivers’ ratings, as well as to identify and discount customers whose pattern of ratings suggests bias.
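Purely as an illustration of what such an analysis might look like (the data fields, the twenty-rating minimum, the 0.75-star threshold, and the flat down-weighting are all invented for this sketch, and a real analysis would also want to control for how other customers rate the same drivers), a platform could compute, for each frequent rater, the gap between how that customer rates drivers in a given demographic group and how the customer rates everyone else, and then give less weight to ratings from customers whose gaps are large and persistent:

```python
from collections import defaultdict
from statistics import mean

def rating_gaps_by_customer(ratings, driver_group, min_ratings=20):
    """ratings: iterable of (customer_id, driver_id, stars) tuples.
    driver_group: dict mapping driver_id -> demographic group label.
    Returns {customer_id: {group: gap}}, where gap is the customer's average
    rating for drivers in that group minus their average for everyone else."""
    by_customer = defaultdict(list)
    for customer_id, driver_id, stars in ratings:
        by_customer[customer_id].append((driver_group.get(driver_id, "unknown"), stars))

    gaps = {}
    for customer_id, rows in by_customer.items():
        if len(rows) < min_ratings:
            continue  # too little data to call anything a pattern
        for group in {g for g, _ in rows}:
            in_group = [s for g, s in rows if g == group]
            out_group = [s for g, s in rows if g != group]
            if in_group and out_group:
                gaps.setdefault(customer_id, {})[group] = mean(in_group) - mean(out_group)
    return gaps

def rating_weight(gap, threshold=-0.75):
    """Down-weight ratings from customers who consistently score one group
    well below everyone else they rate."""
    return 0.5 if gap <= threshold else 1.0
```

The specifics matter less than the underlying point: the same data pipeline that aggregates stars into discipline and deactivation decisions could audit those stars first.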

The value, and necessity, of such monitoring is suggested by a self-study performed by WordStream.  It found that customer satisfaction ratings overwhelmingly favored men over women among its marketing representatives.  This outcome sharply diverged from other job performance metrics, strongly implying that customers were expressing biased perception and evaluation.

Legal claims of discrimination will press these questions regardless of how employee status controversies are resolved.  A Reconstruction-era federal statute known as Section 1981 prohibits race and immigration status discrimination in all contracts, not just employment, and there is no doubt that Uber and its drivers are in some kind of contractual relationship.  Although the Supreme Court has barred “disparate impact” suits under Section 1981, that may not present as insurmountable a barrier as commonly assumed.  Instead, Section 1981 generally applies the same disparate treatment and hostile work environment concepts as Title VII, and so the points made above should apply.  Even the important General Building Contractors Association case limiting the scope of Section 1981 may be less restrictive than generally thought.

Additionally, federal and state laws barring discrimination in public accommodations or, as in California, “business establishments” more generally, may apply, and with a much longer list of protected statuses.  Thus far, these statutes have been invoked to challenge discrimination against end consumers like Uber passengers.  But note that Uber’s theory is that drivers, too, are its customers, paying for access to the platform.  Thus, if Uber convinces courts that it isn’t in the driving business at all but instead merely creates an online marketplace, the result will simply be drivers stating their claims as customers seeking fair access to the platform rather than as employees seeking fair employment opportunities.  Again, limitations on disparate impact claims may be beside the point.

The WordStream example also is a reminder that, as with employee status, gig economy platforms are best seen as the flashy tip of a much larger iceberg.  The conceptual and technical issues presented by customer feedback about Uber drivers apply to reputation and feedback systems more generally.  As Lu-in Wang insightfully discusses in a forthcoming article, the legal challenges of addressing employers’ use of biased customer feedback arise throughout a much broader movement toward “management by customer” that pervades the service sector.  If attention to the gig economy can stimulate critical analysis of this broader phenomenon and some new thinking about what it means to discriminate, that will be a welcome development indeed.
