Abstract
More than sixty years after civil rights activists pioneered America’s first ridesharing network, the connections between transportation, innovation, and discrimination are again on full display. Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combating stubbornly persistent barriers to transportation. But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability in services including Uber, Lyft, TaskRabbit, Grubhub, and Amazon Delivery.
Surveying the methodologies employed by these studies reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical level, not an individual one. As a structural matter, this isn’t coincidental. As America transitions to an increasingly algorithmic society, all signs now suggest we are leaving traditional brick-and-mortar establishments behind for a new breed of data-driven ones. Discrimination, in other words, is going digital. And when it does, it will manifest itself—almost by definition—at a macroscopic scale. Why does this matter? Because not all of our civil rights laws cognize statistically based discrimination claims. And as it so happens, Title II may be among those that do not.
This piece discusses the implications of this doctrinal uncertainty in a world where statistically based claims are likely to be pressed against data-driven establishments with increasing regularity. Its goals are twofold. First, it seeks to build upon adjacent scholarship by fleshing out the specific structural features of emerging business models that will make Title II’s cognizance of “disparate effect” claims so urgent. In doing so, it argues that it is not the “platform economy,” per se, that poses an existential threat to the statute but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature. It is the algorithms underlying these emerging businesses that are of greatest doctrinal concern—regardless of whether the businesses operate inside the platform economy or outside it. Second, this essay joins others in calling for policy reforms focused on modernizing our civil rights canon. It argues that our transition from the “Internet Society” to the “Algorithmic Society” will demand that Title II receive a doctrinal update. If it is to remain relevant in the years and decades ahead, Title II must become Title 2.0.
Introduction
For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics.
—Oliver Wendell Holmes, Jr.
The future is already here—it is just unevenly distributed.
—William Gibson
It took just four days after Rosa Parks’ arrest to mount a response. Jo Ann Robinson, E.D. Nixon, Ralph Abernathy, and a little-known pastor named Martin Luther King, Jr. would head a coalition of activists boycotting Montgomery, Alabama’s public buses.
Leaders announced the plan the next day, expecting something like a 60% turnout.
But to their surprise, more than 90% of the city’s black ridership joined. The total exceeded 40,000 individuals.
Sheer numbers—they quickly realized—meant that relying on taxis as their sole means of vehicular transport would be impossible. Instead, they got creative. The coalition organized an elaborate system of carpools and cabbies that managed to charge rates comparable to those of Montgomery’s own municipal bus system.
And so it was that America’s first ridesharing network was born.
Fast forward some sixty years to the present and the connections between transportation, innovation, and civil rights are again on full display. Nowadays, the networking system pioneered by Montgomery’s protestors is among the hottest tickets in tech. Newly minted startups launching “ridesharing platforms,” “carsourcing software,” “delivery sharing networks,” “bikesharing” offerings, “carpooling apps,” and “scooter sharing” schemes are a seemingly daily fixture of the news. And just as was true during the Civil Rights Movement, discrimination continues to be a hot-button issue.
Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combating discriminatory barriers to transportation that stubbornly persist in modern America.
But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability in the likes of Uber, Lyft, TaskRabbit, Grubhub, and Amazon Delivery.
The weight of the evidence suggests a cautionary tale: The same technologies capable of combating modern discrimination also appear capable of producing it.
Surveying the methodologies employed by these reports reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical—not individual—scale.
As a structural matter, this isn’t coincidental. Uber, Amazon, and a host of other technology leaders have transformed traditional brick-and-mortar business models into data-driven ones fit for the digital age. Yet in doing so, they’ve also taken much discretion out of the hands of individual decision-makers and put it into the hands of algorithms.
This transfer holds genuine promise of alleviating the kinds of overt prejudice familiar to Rosa Parks and her fellow activists. But it also means that when discrimination does occur, it will manifest—almost by definition—at a statistical scale.
This piece discusses the implications of this fast-approaching reality for one of our most canonical civil rights statutes, Title II of the Civil Rights Act of 1964.
Today, a tentative consensus holds that certain of our civil rights laws recognize claims of “discriminatory effect” based in statistical evidence. But Title II is not among them.
Indeed, more than a half century after its passage, it remains genuinely unclear whether the statute encompasses disparate effect claims at all.
This essay explores the implications of this doctrinal uncertainty in a world where statistically-based claims are likely to be pressed against data-driven companies with increasing regularity. Its goals are twofold. First, it seeks to build upon adjacent scholarship
by fleshing out the specific structural features of emerging business models that will make Title II’s cognizance of disparate effect claims so urgent. In doing so, it argues that it is not the “platform economy,” per se, that poses a threat to the civil rights law but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature.
It is the algorithms underlying these emerging businesses that are of greatest doctrinal concern—regardless of whether the businesses operate inside the platform economy or outside it.
Second, this essay joins other scholars in calling for policy reforms focused on modernizing our civil rights canon.
It argues that our transition from the “Internet Society” to the “Algorithmic Society” will demand that Title II receive a doctrinal update.
If the statute is to remain relevant in the years and decades ahead, Title II must become Title 2.0.
I. The Rise of Data-Driven Transportation
Today, algorithms drive society. They power the apps we use to skirt traffic, the networking systems we use to dispatch mobility services, and even the on-demand delivery providers we use to avoid driving in the first place.
For most Americans, paper atlases have been shrugged. Algorithms, of one variety or another, now govern how we move. And far from being anywhere near “peak”
levels of digitization, society’s embrace of algorithms only appears to be gaining steam. With announcements of new autonomous and connected technologies now a daily fixture of the media, all signs suggest that we’re at the beginning of a long road to algorithmic ubiquity. Data-driven transportation might rightly be described as pervasive today. But tomorrow, it is poised to become the de facto means by which people, goods, and services get from Point A to B.
Many have high hopes for this high-tech future, particularly when it comes to combating longstanding issues of discrimination in transportation. Observers have hailed the likes of Uber and Lyft as finally allowing “African American customers [to] catch a drama-free lift from point A to point B.”
They’ve championed low-cost delivery services, such as Amazon and Grubhub, as providing viable alternatives to transit for individuals with disabilities.
And they’ve even praised navigation apps, like Waze, for bursting drivers’ “very white, very male, very middle-to-upper class” bubbles.
It is through algorithmic transportation, in other words, that we’re beginning to glimpse a more equitable America—with our mobility systems finally exorcised of the types of discrimination that stubbornly persist today, some fifty years after the passage of modern civil rights legislation.
A. Out With the Old Bias, In With the New?
As with seemingly all significant technological breakthroughs, however, algorithmic transportation also gives rise to new challenges. And discrimination is no exception. Already, multiple studies have revealed the potential for racial bias to infiltrate the likes of Uber, Lyft, Grubhub, and Amazon.
The National Bureau of Economic Research’s (“NBER”) groundbreaking study revealing a pattern of racial discrimination in Uber and Lyft services is one such exemplar.
After deploying test subjects on nearly 1,500 trips, researchers found that black riders experienced significantly longer wait times and more frequent trip cancellations than their white counterparts.
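To make that methodology concrete, the brief Python sketch below illustrates the kind of group-level comparison on which audit studies of this sort rest. The trip records, field names, and group labels are hypothetical stand-ins offered for illustration; they are not the researchers’ actual data or code.

```python
# A minimal, illustrative sketch of a group-level comparison like those used
# in ridesharing audit studies. The trip records, field names, and group
# labels below are hypothetical; they are not the study's actual data.
from math import sqrt
from statistics import mean

def cancellation_gap(trips):
    """Compare cancellation rates across two rider groups with a simple
    two-proportion z-test. Each trip is a dict such as
    {"group": "A", "cancelled": False, "wait_minutes": 4.2}."""
    group_a = [t for t in trips if t["group"] == "A"]
    group_b = [t for t in trips if t["group"] == "B"]
    p_a = mean(t["cancelled"] for t in group_a)
    p_b = mean(t["cancelled"] for t in group_b)
    pooled = mean(t["cancelled"] for t in trips)
    se = sqrt(pooled * (1 - pooled) * (1 / len(group_a) + 1 / len(group_b)))
    z = (p_a - p_b) / se  # a large |z| suggests the gap is unlikely to be chance
    return p_a, p_b, z
```

A real analysis would, of course, also control for pickup location, time of day, and driver supply before drawing any conclusions.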
The NBER’s piece was preceded—months earlier—by a similarly provocative report from Jennifer Stark and Nicholas Diakopoulos.
Using a month’s worth of Uber API data, the scholars found a statistical correlation between passenger wait times and neighborhood demographic makeup. The upshot? That Uber’s patented “surge pricing algorithm” resulted in disproportionately longer wait times for people of color, even after controlling for factors such as income, poverty, and population density.
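The neighborhood-level version of this analysis can likewise be sketched in a few lines. The regression below, written with the statsmodels library, asks whether estimated wait times rise with a neighborhood’s share of residents of color once income, poverty, and population density are held constant. The column names and data layout are assumptions made for illustration, not the authors’ actual variables.

```python
# A minimal sketch of a neighborhood-level regression of the kind the report
# describes: does estimated wait time rise with a neighborhood's share of
# residents of color once income, poverty, and population density are held
# constant? The DataFrame columns are hypothetical stand-ins.
import pandas as pd
import statsmodels.api as sm

def wait_time_disparity(df: pd.DataFrame):
    """df holds one row per neighborhood observation with columns:
    'wait_minutes', 'pct_residents_of_color', 'median_income',
    'poverty_rate', and 'pop_density'."""
    predictors = ["pct_residents_of_color", "median_income",
                  "poverty_rate", "pop_density"]
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["wait_minutes"], X).fit()
    # A positive, statistically significant coefficient on the demographic
    # variable is the signature of the disparate effect the report describes.
    return (model.params["pct_residents_of_color"],
            model.pvalues["pct_residents_of_color"])
```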
Another example comes from Bloomberg, which reported in 2017 that Amazon’s expedited delivery services tended to bypass predominantly black neighborhoods.
Bloomberg’s findings were subsequently buttressed by a Washington Post piece revealing that the “delivery zones” of services such as Grubhub, DoorDash, Amazon Restaurants, and Caviar appeared highly limited in low-income, majority-minority areas.
B. Discrimination’s Digital Architecture
While the patterns and practices uncovered by these reports vary dramatically, they share one commonality whose importance cannot be overstated. Each of them measures racial bias at a statistical—not individual—scale.
As a structural matter, this observation is in some sense unavoidable. When discrimination occurs in traditional brick-and-mortar contexts, it generally does so out in the open. It is difficult to turn someone away from Starbucks,
after all, without them being made aware of the denial, even if the precise rationale is not clear.
But as the means by which Americans secure their transportation, food, and lodging goes increasingly digital, the “architecture”
of discrimination will take on a different face. Our interactions with cab companies, public transportation providers, and delivery services will be mediated by algorithms that we neither see nor necessarily understand. And face-to-face interactions with service providers, meanwhile, will become a thing of the past.
In countless respects, this transition is cause for celebration. A society driven by algorithms is one that holds genuine hope of eliminating the types of overt discrimination that drove civil rights reforms of past eras. But in its stead, an emerging body of evidence suggests that subtler forms of discrimination may persist—ones that could challenge the doctrinal foundations on which our civil rights laws currently rest.
II. When Blackletter Civil Rights Law Isn’t Black and White
When it comes to holding private entities that provide our transportation, food, and lodging accountable for racial discrimination, the usual suspect is Title II of the Civil Rights Act. Title II sets forth the basic guarantee that “[a]ll persons [are] entitled to the full and equal enjoyment of the goods, services, facilities, privileges, advantages, and accommodations of any place of public accommodation. . . without discrimination or segregation on the ground of race, color, religion, or national origin.”
The statute defines “public accommodation” broadly as essentially any “establishment affecting interstate commerce.”
Pursuing a Title II claim requires, first, establishing a prima facie case of discrimination. To do so, claimants must show they: (1) are members of a protected class; (2) were denied the full benefits of a public accommodation; and (3) were treated less favorably than others
outside of the protected class.
A. The Intent Requirement and the Man of Statistics
At first blush, establishing these prima facie elements using the types of evidence documented by the reports noted in Part I(A) may seem straightforward. But there’s just one tiny detail standing in the way. As it turns out, no one knows whether Title II actually prohibits the kinds of racial disparities uncovered by the studies.
Not all civil rights laws, after all, allow claimants to use statistically disparate impacts as evidence of discrimination. Title VI, for example, does not, whereas Title VII does.
This distinction owes, in large part, to the antidiscrimination canon’s “intent requirement,” which draws a doctrinal dividing line between acts exhibiting “discriminatory intent” and those, instead, exhibiting “discriminatory effects.”
To oversimplify, acts of intent can be understood as overt, “invidious acts of prejudiced decision-making.”
Acts of effect, meanwhile, are those that “actually or predictably . . . result[] in a disparate impact on a group of persons” even when the explicit intent behind them is not discriminatory.
Ask Rosa Parks to give up her seat for a white passenger? The civil rights claim filed in response would likely take a narrow view of the interaction, examining the discrete intent behind it. Systematically route buses in such a way that they bypass Rosa Parks altogether? Under the right circumstances, this could be evidence of discrimination just as troubling as in the former scenario. But the civil rights claim it gave rise to would likely entail a far wider view of the world—one that couched its arguments in statistics.
Today, a tentative consensus holds that theories involving discriminatory effects are available under the Fair Housing Act, the Age Discrimination in Employment Act, certain titles of the Americans With Disabilities Act, and Title VII of the Civil Rights Act. When it comes to Title II, however, the jury is still out. Neither the Supreme Court, a major circuit court, nor a federal administrative body has resolved the issue to date, and “there is a paucity of cases analyzing it.”
B. Hardie’s Open Question
Uncertainties surrounding Title II’s scope most recently came to a head in Hardie v. NCAA. The case involved a challenge to the collegiate association’s policy of banning convicted felons from coaching certain tournaments. The plaintiff, Dominic Hardie, alleged that the policy disparately impacted blacks, putting the question of Title II’s “discriminatory effect” liability at center stage.
The district court ruled against Hardie, finding that Title II did not cognize such claims. But on appeal, the case’s focal point changed dramatically. In a surprise turn of events, the NCAA abandoned its structural argument against disparate impact liability outright. Instead, it conceded that Title II did, in fact, recognize statistical effects but asserted that the NCAA’s policy was, nonetheless, not a violation.
Thus, when the case came before the Ninth Circuit, the question of whether Title II encompassed discriminatory effects was, essentially, rendered moot. The court ruled in favor of the NCAA’s narrower argument but went out of its way to emphasize that it had not decided the question of discriminatory effect liability. And no other major appeals court has addressed the issue since.
C. Title II’s Fair Housing Act Moment
It was not long ago that another civil rights centerpiece—the Fair Housing Act of 1968 (FHA)—found itself at a similar crossroads. The FHA makes it illegal to deny someone housing based on race. But nearly a half century after the statute’s passage, the question of whether it prohibited disparate effects had not been tested in our highest court.
By 2015, the Supreme Court had twice taken up the issue in two years.
And twice, the cases had settled in advance of a ruling.
Then came Texas Department of Housing and Community Affairs v. The Inclusive Communities Project, in which the plaintiff alleged that a state agency’s allocation of tax credits had disparately impacted the housing options of low-income families of color.
This time, there was no settlement. And the ruling that followed was subsequently described as the “most important decision on fair housing in a generation.”
Writing for the 5-4 majority, Justice Kennedy affirmed that the FHA extended to claims of both discriminatory intent and effect.
Kennedy was careful to note that the FHA’s passage occurred at a time when explicitly racist policies—such as zoning laws, racial covenants, and redlining—were the norm. But the Justice, nonetheless, stressed that more modern claims alleging racially disparate impacts were also “consistent with the FHA’s central purpose.”
D. The New Back of the Bus
Much like the FHA, Title II arrived on the scene when discriminatory effect claims were far from the leading concern among civil rights activists. As Richard Epstein writes:
“Title II was passed when memories were still fresh of the many indignities that had been inflicted on African American citizens on a routine basis. It took little imagination to understand that something was deeply wrong with a nation in which it was difficult, if not impossible, for African American citizens to secure food, transportation, and lodging when traveling from place to place in large sections of the country. In some instances, no such facilities were available, and in other cases they were only available on limited and unequal terms.”
The paradigmatic act of discrimination, in other words, was intentional, overt, and explicitly racial.
Today, however, we are heading toward a world in which this paradigm is apt to turn on its head. Gone will be the days of racially explicit denials of service such as the well-documented phenomena of “hailing a cab while black,” “dining while black,” “driving while black,” or “shopping while black.”
But as an increasing body of evidence suggests, inequality will not simply disappear as a consequence. Rather, discrimination will go digital. And when it does occur, it will likely manifest not as a discrete act of individual intent but instead as a statistically disparate effect.
With this future in view, forecasting the consequences for Title II requires little speculation. Absent the ability to bring statistically-based claims against tomorrow’s data-driven establishments, Title II could be rendered irrelevant.
If America is to deliver on its guarantee of equal access to public accommodations, its civil rights laws must reach the data-driven delivery services, transportation providers, and logistics operators that increasingly move our society.
Failing to do so simply because these business models were not the norm at the time of the statute’s passage could lead to tragic results. As Oliver Wendell Holmes, Jr. wrote more than a century ago:
“It is revolting to have no better reason for a rule of law than that it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past.”
To save one of our antidiscrimination canon’s most iconic statutes from such a fate, all signs now suggest it will need a doctrinal update. Title II, in software parlance, must become Title 2.0.
III. A Policy Roadmap for Title 2.0
With the foregoing analysis in our rearview mirror, it is now possible to explore the road ahead. The policy challenges of applying Title II to a data-driven society appear to be at least threefold. Policymakers should establish: (1) whether Title II cognizes statistically based claims; (2) which modern entities Title II covers; and (3) what oversight mechanisms are necessary to detect discrimination by such entities. The following sections discuss these three challenges, as well as the steps policymakers can take to address them through judicial, legislative, or regulatory reform.
A. Statistically-based Claims in a Data-Driven Society
The first, and most obvious, policy reform entails simply clarifying Title II’s cognizance of statistically based claims. Such clarification could come at the judicial or regulatory level, as occurred with the FHA. Or it could come at the legislative level, as occurred with Title VII.
Though the question of whether litigants can sustain statistical claims under Title II may seem like an all-or-nothing proposition, recent experience shows this isn’t actually true. Short of directly translating Title VII theories to Title II, there exist numerous alternatives. Justice Kennedy himself noted as much in Inclusive Communities when he remarked that “the Title VII framework may not transfer exactly to [all other] context[s].”
Nancy Leong and Aaron Belzer convincingly argue that one framing might involve adopting a modern take on discriminatory intent claims. The scholars assert that even if intent is deemed essential under Title II, statistically based claims could nevertheless satisfy the requirement.
In their telling, the intent requirement could manifest through a company’s “decision to continue using a platform design or rating system despite having compelling evidence that the system results in racially disparate treatment of customers.”
Under this view, the claim would then be distinguishable from unintentional claims because “once the aggregated data is known to reflect bias and result in discrimination,” its continued use would constitute evidence of intent.
Not only would this approach heed Kennedy’s admonition in Inclusive Communities “that disparate-impact liability [be] properly limited,”
it may also offer an elegant means of addressing concerns, raised in dissenting opinions, that Title II claims must demonstrate a defendant’s discriminatory “intent.”
Policymakers should, therefore, take this line of analysis into consideration when clarifying Title II’s scope.
B. Public Accommodations in a Data-Driven Society
Although this essay has thus far presumed that large-scale algorithmic transportation services like Uber and Amazon are covered by Title II, even that conclusion remains unclear. As enacted, Title II is actually silent as to whether it covers conventional cabs, much less emerging algorithmic transportation models.
A second policy reform, therefore, would entail clarifying whether Title II actually covers such entities in the first place.
Here, understanding the origins of the Civil Rights Act of 1964 is again useful. The statute lists several examples of public accommodations that were typical of America circa 1960.
Some courts have suggested that this list is more or less exhaustive.
But that view is inconsistent with the law’s own language.
And numerous others have taken a broader view of the term “public accommodations,” which extends to entities that were not necessarily foreseen by the statute’s original drafters.
Policymakers in search of analogous interpretations of public accommodations laws need look no further than the Americans With Disabilities Act (ADA). Like Title II, the ADA covers places of public accommodation. And, again like Title II, its drafters listed specific entities as examples—all of which were the types of brick-and-mortar establishments characteristic of the time. But in the decades since its passage, the ADA’s definition has managed to keep pace with our increasingly digital world. Multiple courts have extended the statute’s reach to distinctly digital establishments, including popular websites and video streaming providers.
Policymakers should note, however, that Uber and Lyft have fiercely resisted categorization as public accommodations.
In response to numerous suits filed against them, the companies have insisted they are merely “platforms” or “marketplaces” connecting sellers and buyers of particular services.
As recently as 2015, this defense was at least plausible. And numerous scholars have discussed the doctrinal challenges of applying antidiscrimination laws to these types of businesses.
But increasingly, companies like Uber, Lyft, and Amazon are shifting away from passive “platform” or “marketplace” models into more active service provider roles.
All three, for example, now deploy transportation services directly. And a slew of similarly situated companies appear poised to replicate this model.
For most such companies, passive descriptors like “platform” or “marketplace” are no longer applicable. Our laws should categorize them accordingly.
C. Oversight in a Data-Driven Society
Finally, regulators should consider implementing oversight mechanisms that allow third parties to engage with the data necessary to measure and detect discrimination. In an era of big data and even bigger trade secrets, this is of paramount importance. Because companies retain almost exclusive control over their proprietary software and its resultant data, barriers to accessing the information necessary even to detect algorithmic impacts can often be insurmountable. And the ensuing asymmetries can render discrimination or bias effectively invisible to outsiders.
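What such third-party measurement might look like can be sketched briefly. The hypothetical audit below compares expedited-delivery coverage across neighborhood groups; the field names and the 80 percent benchmark (borrowed, by analogy, from employment law’s “four-fifths rule”) are illustrative assumptions rather than an established Title II standard.

```python
# A hypothetical sketch of the kind of check an outside auditor could run if
# given access to service data: compare expedited-delivery coverage across
# neighborhood groups. Field names and the 80% benchmark (an analogy to
# employment law's "four-fifths rule") are illustrative assumptions only.
from statistics import mean

def coverage_disparity(neighborhoods):
    """Each neighborhood is a dict such as
    {"majority_minority": True, "covered": False}."""
    minority = [n["covered"] for n in neighborhoods if n["majority_minority"]]
    other = [n["covered"] for n in neighborhoods if not n["majority_minority"]]
    rate_minority, rate_other = mean(minority), mean(other)
    ratio = rate_minority / rate_other if rate_other else float("nan")
    needs_review = ratio < 0.8  # a flag for closer scrutiny, not a legal conclusion
    return rate_minority, rate_other, ratio, needs_review
```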
Another benefit of oversight mechanisms is their ability to promote good corporate governance without the overhead of more intrusive command-and-control regulations. Alongside transparency, after all, comes the potential for extralegal forces such as ethical consumerism, corporate social responsibility, perception bias, and reputational costs to play meaningful roles in checking potentially negative behaviors.
By pricing externalities through the threat of public or regulatory backlash, these and other market forces can help to regulate sectors undergoing periods of rapid disruption with less risk of chilling innovation than traditional regulation.
Some scholars have proposed federal reforms—akin to those put forward by the Equal Employment Opportunity Commission,
the Department of Housing and Urban Development,
and the Department of Education —as a means of implementing oversight mechanisms for Title II. But state-level action, in this instance, may be more effective. A multi-fronted push that is national in scope provides a higher likelihood of successful reform. And much like the “Brussels Effect” documented at an international level, intra-territorial policies imposed on inter-territorial entities can have extra-territorial effects within the U.S. As the saying goes: “As goes California, so goes the nation.”
As a parting note, it cannot be stressed enough that mere “disclosure” mechanisms are not necessarily sufficient. For oversight to be meaningful, it must be actionable—or, in Deirdre Mulligan’s phrasing, “contestable.” That is, it must allow downstream users to “contest[] what the ideal really is.” Moreover, if oversight is to be accomplished through specific administrative bodies, policymakers must ensure that those bodies have the technical know-how and financial resources available to promote public accountability, transparency, and stakeholder participation. Numerous scholars have explored these concerns at length, and regulators would do well to consider their insights.
Conclusion
Following any major technological disruption, scholars, industry leaders, and policymakers must consider the challenges it poses to our existing systems of governance. Will the technology meld with those systems? Must our policies change?
Algorithmic transportation is no exception. This piece examines its implications for one of America’s most iconic statutes: Title II of the Civil Rights Act of 1964. As algorithms expand into a vast array of transportation contexts, they will increasingly test the doctrinal foundations of this canonical law. And without meaningful intervention, Title II could soon find itself at risk of irrelevance.
But unlike the policy responses that followed technological breakthroughs of the past, those we have seen so far offer genuine hope of timely reform. As Ryan Calo notes, unlike a host of other transformative technologies that escaped policymakers’ attention until too late, this new breed “has managed to capture [their] attention early in its life-cycle.”
Can this attention be channeled in directions that ensure that our most important civil rights laws keep pace with innovation? That question, it now appears, should be on the forefront of our policy agenda.
† Legal Fellow, Center for Automotive Research at Stanford (CARS); Affiliate Scholar of CodeX: The Center for Legal Informatics at Stanford and the Stanford Machine Learning Group. The author particularly thanks Chris Gerdes, Stephen Zoepf, Rabia Belt, and the Center for Automotive Research at Stanford (CARS) for their generous support.