March 2019

With roughly a clip a month – most of them corporate fluff – Waymo’s YouTube channel is neither the most exciting nor the most informative one. At any rate, those (like me) who keep looking for clues about Waymo’s progress should not expect much to surface there.

That was until February 20th, when Waymo low-key published a 15-second clip of its car in action – the main screen showing a rendering of what the car “sees” and a corner thumbnail showing the view from the dash cam. The key point: Waymo’s car apparently crosses an intersection with broken traffic lights, under police control, without any trouble. Amazing! Should we conclude that Level 5 is at our doorstep?

The car and tech press were quick to spot this one, and reports were mostly praise. Yet Brad Templeton, in his piece for Forbes, points to a few things that the clip does not say. First, there is the fact that Waymo operates in a geographically enclosed area, where the streets, sidewalks, and other hard infrastructure (lights, signs, and probably lane markings) are pre-mapped and already loaded into the system. In other words, Waymo’s car does not discover things as it cruises along the streets of Northern California. Moreover, the traffic lights here do not work, so technically this is just another four-way-stop intersection, with the difference that it is rather busy and there is a police officer directing traffic in the middle. Finally, the car just goes straight, which is by far the easiest option (no left turn, for example).

Beyond that, what Waymo alleges and wants us to see is that the car “recognizes” the police officer – or, at the very least, recognizes that there is something person-shaped standing in the middle of the intersection making certain gestures at the car – and that the car’s sensors and Waymo’s algorithms are now capable of understanding the hand signals of law enforcement officers.

Now, less than a year ago, I heard the CEO of a major player in the industry assert that such a thing was impossible – in reference to CAVs being able to detect and correctly interpret the hand signals cyclists sometimes use. It seems that a few months later, we’re there. Or are we? One issue that flew more or less under the radar is how exactly the car recognizes the law enforcement officer here. Would a random passerby playing traffic cop have the same effect? If so, is that what we want?

As a member of the “Connected and Automated Vehicles: Preparing for a Mixed Fleet Future” Problem Solving Initiative class held at the University of Michigan Law School last semester, my team and I had the opportunity to think about just that – how to make sure that road interactions stay as close as possible to what they are today, and conversely how to foreclose the awkward interactions or possible abuses that “new ways to communicate” would introduce. Should a simple hand motion be able to “command” a CAV? While such a question cuts across many domains, our perspective was mostly a legal one, and our conclusion was that any new signal that CAV technology enables (from the perspective of pedestrians and other road users) should be non-mandatory and limited to enabling mutual understanding of intentions, without affecting the behavior of the CAV. What we see in this video is the opposite: seemingly, the officer directing traffic is not equipped with special beacons broadcasting some form of “law enforcement” signal, and it is implied – although unconfirmed – that there is no human intervention. We are left awed, maybe. But reassured? Maybe not.

The takeaway may be just this: the issues raised by this video are real, and they are issues Waymo, and others, will at some point have to address publicly. Secrecy may be good for business, but only up to a point. Engagement by key industry players is of the highest importance if we want to foster trust and avoid having CAV technology crash-land in our societies.

Earlier this month, the Journal of Law and Mobility hosted our first annual conference at the University of Michigan Law School. The event provided a great opportunity to convene some of the top minds working at the intersection of law and automated vehicles. What struck me most about the conference, put on by an organization dedicated to law and mobility, was how few of the big questions related to automated vehicles are actually legal questions at this point in their development.

The afternoon panel on whether AVs should always follow the rules of the road as written was emblematic of this tension. Should an automated vehicle be capable of running a red light, or of swerving across a double yellow line while driving down the street? Should it always obey the posted speed limit?

The knee-jerk reaction of most people would probably be something along the lines of, “of course you shouldn’t program a car that can break the law.” After all, human drivers are supposed to follow the law. So why should an automated vehicle, which is programmed in advance by a human making a sober, conscious choice, be allowed to do otherwise?

Once you scratch the surface, though, the question becomes much more nuanced. Human drivers break the law in all kinds of minor ways to maintain safety, or in response to the circumstances of the moment. A human driver will run a red light if there is no cross-traffic and the car bearing down from behind shows no signs of slowing. A human will drive into the wrong lane or onto the shoulder to avoid a downed tree branch, or a child rushing out into the street. A human driver may speed away upon noticing a nearby car acting erratically. All of these actions, although they violate the law, may be taken in the interest of safety in the right circumstances. Even knowing they broke the law, a human driver ticketed in such a circumstance would feel the consequence was unjust.
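
To make the design choice concrete, here is a minimal, purely illustrative sketch in Python of what encoding one such narrow exception might look like. Everything here is an assumption for illustration – the condition names, the threshold, and the function are invented, not any manufacturer’s actual logic.

```python
# Hypothetical sketch of a narrowly bounded, safety-justified exception to a
# traffic rule. All names and thresholds below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Scene:
    lane_blocked: bool          # e.g., a downed branch or stalled car ahead
    opposite_lane_clear: bool   # perception reports the oncoming lane empty
    oncoming_gap_s: float       # seconds until the next oncoming vehicle

MIN_ONCOMING_GAP_S = 8.0  # illustrative safety margin, not a real standard

def may_cross_double_yellow(scene: Scene) -> bool:
    """Allow a normally illegal maneuver only under tight, observable
    conditions, mirroring what a human does around an obstruction."""
    return (
        scene.lane_blocked
        and scene.opposite_lane_clear
        and scene.oncoming_gap_s >= MIN_ONCOMING_GAP_S
    )

print(may_cross_double_yellow(Scene(True, True, 12.0)))  # True: go around
print(may_cross_double_yellow(Scene(True, False, 3.0)))  # False: wait
```

The point is not the particular threshold but that each exception becomes an explicit, auditable design decision rather than a split-second human judgment.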

If automated vehicles should be able to break the law in at least some circumstances, the question shifts: which circumstances? Answering that is beyond the scope of this post. At the moment, I don’t think anyone has the right answer. Instead, the point of this post is to highlight the type of moment-to-moment decisions every driver makes every day to keep themselves and those around them safe. The rules of the road provide a rough cut, codifying what will be best for most people most of the time. They could not possibly anticipate every situation and create a special legal rule for each one. If they tried, the traffic laws would quickly grow to fill several libraries.

In my view, the question of whether an AV should be able to break the law is only tangentially a legal question. After arriving at an answer of “probably sometimes,” the question quickly shifts to when, in what circumstances, and whether the law needs to adapt to make different maneuvers legal. These questions have legal aspects, but they are also moral and ethical questions weighted with the full range of human driving experience. Answering them will be among the most important and difficult challenges for the AV industry in the coming years.

The “Trolley Problem” has been buzzing around for a while now, so much so that it has become the subject of large empirical studies aimed at finding a solution as close to “our values” as possible – and, more casually, the subject of an episode of The Good Place.

Could it be, however, that the trolley problem isn’t one? In a recent article, the EU Observer, an investigative not-for-profit outlet based in Brussels, took the European Commission to task for its “tunnel vision” with regard to CAVs and for seeming to embrace the benefits of this technological and social change without an ounce of doubt or skepticism. While there are certainly things to be worried about when it comes to CAV deployment (see previous posts from this very blog by fellow bloggers here and here), the famed trolley might not be one of them.

The trolley problem seeks to illustrate one of the choices that a self-driving algorithm must – allegedly – make. Faced with a situation where every available option kills someone, the trolley problem asks who is to be killed: the young? The old? The pedestrian? The foreigner? Those who put forward the trolley problem usually do so to show that, as humans, we are faced with morally untenable alternatives when coding algorithms, such as deciding who is to be saved in an unavoidable crash.

The trolley problem is not a problem, however, because it makes a number of assumptions – too many. The result is a hypothetical scenario which is simple, almost elegant, but mostly blatantly wrong. One such assumption is the rails. Not necessarily physical ones, like those of actual trolleys, but the ones on which the whole problem is cast. CAVs are not on rails, in any sense of the word, and their algorithms will include the opportunity to go “off-rails” when needed – such as getting onto the shoulder or the sidewalk. The rules of the road already incorporate a certain amount of flexibility, and such flexibility will be built into the algorithms.

Moreover, the very purpose of the constant sensor input processed by the driving algorithm is precisely to avoid putting the CAV in such a situation where the only options that remain are collision or collision.

But what if? What if a collision is truly unavoidable? Even then, it is highly misleading to portray CAV algorithm design as a job where one has to incorporate a piece of code specific to every single decision made in the course of driving. The CAV will never be faced with an input of the type we all too often use to present the trolley problem: go left and kill this old woman, go right and kill this baby. The driving algorithm will certainly not understand the situation as one where it would kill someone; it may understand that a collision is imminent and that multiple paths are closed. What would it do then? Brake, I suppose, and steer to try to avoid the collision, like the rest of us would.
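
As a toy illustration of that last point – and only an illustration; the maneuver names and risk numbers below are invented – an emergency planner can be thought of as scoring candidate maneuvers by estimated collision probability and picking the least bad one:

```python
# Toy sketch: choose among emergency maneuvers by estimated collision risk.
# The maneuvers and probabilities are invented; in a real system they would
# come from perception and trajectory prediction, not a hand-typed dict.

candidate_maneuvers = {
    "brake_straight": 0.9,      # estimated probability of any collision
    "brake_steer_left": 0.6,
    "brake_steer_right": 0.4,   # e.g., the shoulder is partially open
}

def choose_emergency_maneuver(risks: dict) -> str:
    # Brake hard and take the trajectory least likely to hit anything.
    return min(risks, key=risks.get)

print(choose_emergency_maneuver(candidate_maneuvers))  # -> brake_steer_right
```

The “decision” is over trajectories and risk estimates; there is no input slot for the age or identity of whoever might be hit.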

Maybe what the trolley problem truly reveals is that we are uneasy with automated cars causing accidents – that is, because they are machines, we are much more comfortable with the idea that they will be perfect and coded so that no accident may ever happen. If, as a first milestone, CAVs are merely as safe as human drivers, that would still be a great scientific achievement. I recognize, however, that it might not be enough for public perception, but that speaks more to our relationship with machines than to any truth behind the murderous trolley. All in all, it is unfortunate that such a problem continues to keep brains busy while more tangible problems (such as what to do with all those batteries) deserve research, media attention, and political action.

By Bryan Casey

Cite as: Bryan Casey, Title 2.0: Discrimination Law in a Data-Driven Society, 2019 J. L. & Mob. 36.

Abstract

More than a half century after civil rights activists pioneered America’s first ridesharing network, the connections between transportation, innovation, and discrimination are again on full display. Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combating stubbornly persistent barriers to transportation. But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability at companies including Uber, Lyft, Task Rabbit, Grubhub, and Amazon Delivery.

Surveying the methodologies employed by these studies reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical level, not an individual one. As a structural matter, this isn’t coincidental. As America transitions to an increasingly algorithmic society, all signs now suggest we are leaving traditional brick-and-mortar establishments behind for a new breed of data-driven ones. Discrimination, in other words, is going digital. And when it does, it will manifest itself—almost by definition—at a macroscopic scale. Why does this matter? Because not all of our civil rights laws cognize statistically-based discrimination claims. And as it so happens, Title II could be among them.

This piece discusses the implications of this doctrinal uncertainty in a world where statistically-based claims are likely to be pressed against data-driven establishments with increasing regularity. Its goals are twofold. First, it seeks to build upon adjacent scholarship by fleshing out the specific structural features of emerging business models that will make Title II’s cognizance of “disparate effect” claims so urgent. In doing so, it argues that it is not the “platform economy,” per se, that poses an existential threat to the statute but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature. It is the algorithms underlying “platform economy businesses” that are of greatest doctrinal concern—regardless of whether such businesses operate inside the platform economy or outside it. Second, this essay joins others in calling for policy reforms focused on modernizing our civil rights canon. It argues that our transition from the “Internet Society” to the “Algorithmic Society” will demand that Title II receive a doctrinal update. If it is to remain relevant in the years and decades ahead, Title II must become Title 2.0.


Introduction

For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics.

—Oliver Wendell Holmes, Jr. [1. Oliver Wendell Holmes, The Path of the Law, 10 Harv. L. Rev. 457, 469 (1897).]

The future is already here—it is just unevenly distributed.

—William Gibson [2. As quoted in Peering round the corner, The Economist, Oct. 11, 2001, https://www.economist.com/special-report/2001/10/11/peering-round-the-corner.]

It took just four days after Rosa Parks’ arrest to mount a response. Jo Ann Robinson, E.D. Nixon, Ralph Abernathy, and a little-known pastor named Martin Luther King, Jr. would head a coalition of activists boycotting Montgomery, Alabama’s public buses. [3. Jack M. Bloom, Class, Race, and the Civil Rights Movement 140 (Ind. U. Press ed. 1987).] Leaders announced the plan the next day, expecting something like a 60% turnout. [4. Id.] But to their surprise, more than 90% of the city’s black ridership joined. The total exceeded 40,000 individuals. [5. See History.com Editors, How the Montgomery Bus Boycott Accelerated the Civil Rights Movement, History Channel (Feb. 3, 2010), https://www.history.com/topics/black-history/montgomery-bus-boycott.]

Sheer numbers—they quickly realized—meant that relying on taxis as their sole means of vehicular transport would be impossible. Instead, they got creative. The coalition organized an elaborate system of carpools and cabbies that managed to charge rates comparable to Montgomery’s own municipal system. [6. Id.] And so it was that America’s first ridesharing network was born. [7. More precisely, the first large-scale ridesharing network making use of automobiles.]

Fast forward some sixty years to the present and the connections between transportation, innovation, and civil rights are again on full display. Nowadays, the networking system pioneered by Montgomery’s protestors is among the hottest tickets in tech. Newly minted startups launching “ridesharing platforms,” “carsourcing software,” “delivery sharing networks,” “bikesharing” offerings, “carpooling apps,” and “scooter sharing” schemes are a seemingly daily fixture of the news. And just as was true during the Civil Rights Movement, discrimination continues to be a hot-button issue.

Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combating discriminatory barriers to transportation that stubbornly persist in modern America. [8. See infra Part I.] But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability in the likes of Uber, Lyft, Task Rabbit, Grubhub, and Amazon Delivery. [9. See infra Part I(A).] The weight of the evidence suggests a cautionary tale: The same technologies capable of combating modern discrimination also appear capable of producing it.

Surveying the methodologies employed by these reports reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical—not individual—scale. [10. See infra Part I(A).]

As a structural matter, this isn’t coincidental. Uber, Amazon, and a host of other technology leaders have transformed traditional brick-and-mortar business models into data-driven ones fit for the digital age. Yet in doing so, they’ve also taken much discretion out of the hands of individual decision-makers and put it into the hands of algorithms. [11. See infra Part II(D).] This transfer holds genuine promise of alleviating the kinds of overt prejudice familiar to Rosa Parks and her fellow activists. But it also means that when discrimination does occur, it will manifest—almost by definition—at a statistical scale.

This piece discusses the implications of this fast-approaching reality for one of our most canonical civil rights statutes, Title II of the Civil Rights Act of 1964. [12. Civil Rights Act of 1964, tit. II, 42 U.S.C. § 2000a (2018).] Today, a tentative consensus holds that certain of our civil rights laws recognize claims of “discriminatory effect” based in statistical evidence. But Title II is not among them. [13. See infra Part II(B). Major courts have recently taken up the issue tangentially, but uncertainty still reigns.] Indeed, more than a half century after its passage, it remains genuinely unclear whether the statute encompasses disparate effect claims at all.

This essay explores the implications of this doctrinal uncertainty in a world where statistically-based claims are likely to be pressed against data-driven companies with increasing regularity. Its goals are twofold. First, it seeks to build upon adjacent scholarship [14. Of particular note is a groundbreaking piece by Nancy Leong and Aaron Belzer, The New Public Accommodations: Race Discrimination in the Platform Economy, 105 Geo. L. J. 1271 (2017).] by fleshing out the specific structural features of emerging business models that will make Title II’s cognizance of disparate effect claims so urgent. In doing so, it argues that it is not the “platform economy,” per se, that poses a threat to the civil rights law but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature. [15. Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501, 509 (1999) (describing “architecture,” “norms,” “law,” and “markets” as the four primary modes of regulation).] It is the algorithms underlying emerging platform economy businesses that are of greatest doctrinal concern—regardless of whether such businesses operate inside the platform economy or outside it. [16. And, needless to say, there will be a great many more companies that operate outside of it.]

Second, this essay joins other scholars in calling for policy reforms focused on modernizing our civil rights canon. [17. See, e.g., Leong & Belzer, supra note 14; Andrew Selbst, Disparate Impact in Big Data Policing, 52 Ga. L. Rev. 109 (2017) (discussing disparate impact liability in other civil rights contexts).] It argues that our transition from the “Internet Society” to the “Algorithmic Society” will demand that Title II receive a doctrinal update. [18. See Jack Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, 51 U.C. Davis L. Rev. 1149, 1150 (2018) (noting that society is entering a new post-internet phase he calls the “Algorithmic Society”).] If the statute is to remain relevant in the years and decades ahead, Title II must become Title 2.0.

I.          The Rise of Data-Driven Transportation

Today, algorithms drive society. They power the apps we use to skirt traffic, the networking systems we use to dispatch mobility services, and even the on-demand delivery providers we use to avoid driving in the first place.

For most Americans, paper atlases have been shrugged. Algorithms, of one variety or another, now govern how we move. And far from being anywhere near “peak” [19. Gil Press, A Very Short History of Digitization, Forbes (Dec. 27, 2015), https://www.forbes.com/sites/gilpress/2015/12/27/a-very-short-history-of-digitization/#1560b2bb49ac (describing digitization technologies in terms of “peak” adoption).] levels of digitization, society’s embrace of algorithms only appears to be gaining steam. With announcements of new autonomous and connected technologies now a daily fixture of the media, all signs suggest that we’re at the beginning of a long road to algorithmic ubiquity. Data-driven transportation might rightly be described as pervasive today. But tomorrow, it is poised to become the de facto means by which people, goods, and services get from Point A to B.

Many have high hopes for this high-tech future, particularly when it comes to combating longstanding issues of discrimination in transportation. Observers have hailed the likes of Uber and Lyft as finally allowing “African American customers [to] catch a drama-free lift from point A to point B.” [20. E.g., Latoya Peterson, Uber’s Convenient Racial Politics, Splinter News (Jul. 23, 2015), https://splinternews.com/ubers-convenient-racial-politics-1793849400.] They’ve championed low-cost delivery services, such as Amazon and Grubhub, as providing viable alternatives to transit for individuals with disabilities. [21. See, e.g., Winnie Sun, Why What Amazon Has Done For Medicaid And Low-Income Americans Matters, Forbes (Mar. 7, 2018), https://www.forbes.com/sites/winniesun/2018/03/07/why-what-amazon-has-done-for-medicaid-and-low-income-americans-matters/#7dbe2ff1ac76; Paige Wyatt, Amazon Offers Discounted Prime Membership to Medicaid Recipients, The Mighty (Mar. 9, 2018), https://themighty.com/2018/03/amazon-prime-discount-medicaid/.] And they’ve even praised navigation apps, like Waze, for bursting drivers’ “very white, very male, very middle-to-upper class” bubbles. [22. E.g., Mike Eynon, How Using Waze Unmasked My Privilege, Medium (Oct. 2, 2015), https://medium.com/diversify-tech/how-using-waze-unmasked-my-privilege-26355a84fe05.] It is through algorithmic transportation, in other words, that we’re beginning to glimpse a more equitable America—with our mobility systems finally exorcised of the types of discrimination that stubbornly persist today, some fifty years after the passage of modern civil rights legislation.

A.         Out With the Old Bias, In With the New?

As with seemingly all significant technological breakthroughs, however, algorithmic transportation also gives rise to new challenges. And discrimination is no exception. Already, multiple studies have revealed the potential for racial bias to infiltrate the likes of Uber, Lyft, Grubhub, and Amazon. [23. See infra notes 14 – 28. See also, e.g., Jacob Thebault-Spieker et al., Towards a Geographic Understanding of the Sharing Economy: Systemic Biases in UberX and TaskRabbit, 21 ACM Transactions on Computer-Human Interaction (2017).] The National Bureau of Economic Research’s (“NBER”) groundbreaking study revealing a pattern of racial discrimination in Uber and Lyft services is one such exemplar. [24. Yanbo Ge et al., Racial and Gender Discrimination in Transportation Network Companies (2016), http://www.nber.org/papers/w22776.] After deploying test subjects on nearly 1,500 trips, researchers found that black riders [25. Or riders with black-sounding names.] experienced significantly higher wait times and trip cancellations than their white counterparts.

The NBER’s piece was preceded—months earlier—by a similarly provocative report from Jennifer Stark and Nicholas Diakopoulos. [26. See Jennifer Stark & Nicholas Diakopoulos, Uber Seems to Offer Better Service in Areas With More White People. That Raises Some Tough Questions., Wash. Post (Mar. 10, 2016), https://www.washingtonpost.com/news/wonk/wp/2016/03/10/uber-seems-to-offer-better-service-in-areas-with-more-white-people-that-raises-some-tough-questions/.] Using a month’s worth of Uber API data, the scholars found a statistical correlation between passenger wait times and neighborhood demographic makeup. The upshot? Uber’s patented “surge pricing algorithm” resulted in disproportionately longer wait times for people of color, even after controlling for factors such as income, poverty, and population density.
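
For readers curious what “controlling for” those factors means in practice, the analysis is essentially a regression of wait times on neighborhood demographics plus the control variables. Here is a minimal sketch of that style of analysis; the file name and column names are placeholders I have invented, not the study’s actual data:

```python
# Minimal sketch of a disparity regression with controls. The CSV and its
# column names are hypothetical stand-ins for tract-level data.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tract_level_waits.csv")  # hypothetical dataset

model = smf.ols(
    "avg_wait_minutes ~ pct_nonwhite + median_income"
    " + poverty_rate + population_density",
    data=df,
).fit()

# A positive, statistically significant coefficient on pct_nonwhite would
# indicate longer waits in communities of color even after holding income,
# poverty, and density constant.
print(model.summary())
```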

Another example comes from Bloomberg, which reported in 2016 that Amazon’s expedited delivery services tended to bypass areas composed of predominantly black residents. [27. See David Ingold & Spencer Soper, Amazon Doesn’t Consider the Race of Its Customers. Should It?, Bloomberg (Apr. 21, 2016), https://www.bloomberg.com/graphics/2016-amazon-same-day/.] Bloomberg’s findings were subsequently buttressed by a Washington Post piece revealing that the “delivery zones” of services such as Grubhub, Door Dash, Amazon Restaurants, and Caviar appeared highly limited in low-income, minority-majority areas. [28. Tim Carman, D.C. has never had more food delivery options. Unless you live across the Anacostia River., Wash. Post (Apr. 2, 2018), https://www.washingtonpost.com/news/food/wp/2018/04/02/dc-has-never-had-more-food-delivery-options-unless-you-live-across-the-anacostia-river/?utm_term=.dead0dca9e8a.]

B.         Discrimination’s Digital Architecture

While the patterns and practices uncovered by these reports vary dramatically, they share one commonality whose importance cannot be overstated. Each of them measures racial bias at a statistical—not individual—scale.

As a structural matter, this observation is in some sense unavoidable. When discrimination occurs in traditional brick-and-mortar contexts, it generally does so out in the open. It is difficult to turn someone away from Starbucks, [29. This example is pulled from an all-too-recent headline. See Rachel Adams, Starbucks to Close 8,000 U.S. Stores for Racial-Bias Training After Arrests, N.Y. Times (Apr. 17, 2018), https://www.nytimes.com/2018/04/17/business/starbucks-arrests-racial-bias.html.] after all, without them being made aware of the denial, even if the precise rationale is not clear.

But as the means by which Americans secure their transportation, food, and lodging go increasingly digital, the “architecture” [30. See Lessig, supra note 15.] of discrimination will take on a different face. Our interactions with cab companies, public transportation providers, and delivery services will be mediated by algorithms that we neither see nor necessarily understand. And face-to-face interactions with service providers, meanwhile, will become a thing of the past.

In countless respects, this transition is cause for celebration. A society driven by algorithms is one that holds genuine hope of eliminating the types of overt discrimination that drove civil rights reforms of past eras. But in its stead, an emerging body of evidence suggests that subtler forms of discrimination may persist—ones that could challenge the doctrinal foundations on which our civil rights laws currently rest.

II.         When Blackletter Civil Rights Law Isn’t Black and White

When it comes to holding private entities that provide our transportation, food, and lodging accountable for racial discrimination, the usual suspect is Title II of the Civil Rights Act. Title II sets forth the basic guarantee that “[a]ll persons [are] entitled to the full and equal enjoyment of the goods, services, facilities, privileges, advantages, and accommodations of any place of public accommodation. . . without discrimination or segregation on the ground of race, color, religion, or national origin.” [31. Civil Rights Act of 1964, 42 U.S.C. § 2000a(a) (2018).] The statute defines “public accommodation” broadly as essentially any “establishment affecting interstate commerce.” [32. See id. (with the exception of a few carve-outs—private clubs being one such example).]

Pursuing a Title II claim requires, first, establishing a prima facie case of discrimination. To do so, claimants must show they: (1) are members of a protected class; (2) were denied the full benefits of a public accommodation; and (3) were treated less favorably than others [33. Id. (specifically, “. . . treated less favorably than others outside of the protected class” who are similarly situated).] outside of the protected class. [34. Having established a prima facie case, the burden of persuasion then shifts to the defendant. For simplicity’s sake, this piece strictly analyzes prima facie claims and does not delve into the complexities of burden shifting and justifying legitimate business decisions under modern antidiscrimination law.]

A.         The Intent Requirement and the Man of Statistics

At first blush, establishing these prima facie elements using the types of evidence documented by the reports noted in Part I(A) may seem straightforward. But there’s just one tiny detail standing in the way. As it turns out, no one knows whether Title II actually prohibits the kinds of racial disparities uncovered by the studies.

Not all civil rights laws, after all, allow claimants to use statistically-disparate impacts as evidence of discrimination. Title VI, for example, does not, whereas Title VII does.

This distinction owes, in large part, to the antidiscrimination canon’s “intent requirement,” which draws a doctrinal dividing line between acts exhibiting “discriminatory intent” and those, instead, exhibiting “discriminatory effects.” [35. See Implementation of the Fair Housing Act’s Discriminatory Effects Standard, 78 Fed. Reg. 11,460 (Feb. 15, 2013) (codified at 24 C.F.R. § 100.500(1) (2014)).] To oversimplify, acts of intent can be understood as overt, “invidious acts of prejudiced decision-making.” [36. Susan Carle, A New Look at the History of Title VII Disparate Impact Doctrine, 63 Fla. L. Rev. 251, 258 (2011).] Acts of effect, meanwhile, are those that “actually or predictably . . . result[] in a disparate impact on a group of persons” even when the explicit intent behind them is not discriminatory. [37. See Implementation of the Fair Housing Act’s Discriminatory Effects Standard, supra note 35.]

Ask Rosa Parks to give up her seat for a white passenger? The civil rights claim filed in response will likely take a narrow view of the interaction, examining the discrete intent behind it. Systematically route buses in such a way that they bypass Rosa Parks altogether? Under the right circumstances, this could be evidence of discrimination just as troubling as the former scenario. But the civil rights claim it gave rise to would likely entail a far wider view of the world—one that couched its arguments in statistics. [38. Title VII offers plaintiffs a “disparate impact” framework under which they may prove unlawful discrimination alongside the more traditional “disparate treatment” model. 42 U.S.C. § 2000e-2(k)(1)(A) (1994).]

Today, a tentative consensus holds that theories involving discriminatory effects are available under the Fair Housing Act, the Age Discrimination in Employment Act, certain Titles of the Americans With Disabilities Act, and Title VII of the Civil Rights Act. When it comes to Title II, however, the jury is still out. Neither the Supreme Court, a major circuit court, nor a federal administrative body has resolved the issue to date, and “there is a paucity of cases analyzing it.” [39. Hardie v. Nat’l Collegiate Athletic Ass’n, 97 F. Supp. 3d 1163, 1163 (S.D. Cal. 2015), aff’d, 861 F.3d 875 (9th Cir. 2017), superseded by 876 F.3d 312 (9th Cir. 2017).]

B.         Hardie’s Open Question

Uncertainties surrounding Title II’s scope most recently came to a head in Hardie v. NCAA. The case involved a challenge to the collegiate association’s policy of banning convicted felons from coaching certain tournaments. The plaintiff, Dominic Hardie, alleged that the policy disparately impacted blacks, putting the question of Title II’s “discriminatory effect” liability at center stage.

The trial court ruled against Hardie, finding that Title II did not cognize such claims. But on appeal, the case’s focal point changed dramatically. In a surprise turn of events, the NCAA abandoned its structural argument against disparate impact liability outright. Instead, it conceded that Title II did, in fact, recognize statistical effects but asserted that the NCAA’s policy was, nonetheless, not a violation. [40. See id. (“On appeal, the NCAA does not challenge Hardie’s argument that Title II encompasses disparate-impact claims. . . . Instead, the NCAA asks us to affirm entry of summary judgment in its favor on either of two other grounds advanced below, assuming arguendo that disparate-impact claims are cognizable under Title II.”).]

Thus, when the case came before the 9th Circuit, the question of whether Title II encompassed discriminatory effects was, essentially, rendered moot. The court ruled in favor of the NCAA’s narrower argument but went out of its way to emphasize that it had not decided the question of discriminatory effect liability. And no other major appeals court has addressed the issue since.

C.         Title II’s Fair Housing Act Moment

It was not long ago that another civil rights centerpiece—the Fair Housing Act of 1968 (FHA)—found itself at a similar crossroads. The FHA makes it illegal to deny someone housing based on race. But a half century after the statute’s passage, the question of whether it prohibited disparate effects had not been tested in our highest court.

By 2015, the Supreme Court had taken up the issue twice in the span of two years. [41. See Gallagher v. Magner, 619 F.3d 823 (8th Cir. 2010), cert. dismissed, 565 U.S. 1187, 132 S. Ct. 1306 (2012); Mt. Holly Gardens Citizens in Action, Inc. v. Twp. of Mt. Holly, 658 F.3d 375 (3d Cir. 2011), cert. dismissed, 571 U.S. 1020, 134 S. Ct. 636 (2013).] And twice, the cases had settled in advance of a ruling.

Then came Texas Department of Housing and Community Affairs v. Inclusive Communities Project, a suit alleging that a state agency’s allocation of tax credits disparately impacted the housing options of low-income families of color. [42. Tex. Dep’t of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 135 S. Ct. 2507, 2514 (2015) [hereinafter “Inclusive Communities”].] This time, there was no settlement. And the ruling that followed was subsequently described as the “most important decision on fair housing in a generation.” [43. Kristen Capps, With Justice Kennedy’s Retirement, Fair Housing Is in Peril, Citylab (Jun. 28, 2018), https://www.citylab.com/equity/2018/06/what-justice-kennedys-retirement-means-for-fair-housing/563924/.]

Writing for the 5-4 majority, Justice Kennedy affirmed that the FHA extended to claims of both discriminatory intent and effect. [44. But his ruling, according to some commenters, took a troublingly narrow view of viable disparate impact claims.] Kennedy was careful to note that the FHA’s passage occurred at a time when explicitly racist policies—such as zoning laws, racial covenants, and redlining—were the norm. But the Justice nonetheless stressed that more modern claims alleging racially disparate impacts were also “consistent with the FHA’s central purpose.” [45. See Inclusive Communities, supra note 42.]

D.        The New Back of the Bus

Much like the FHA, Title II arrived on the scene when discriminatory effect claims were far from the leading concern among civil rights activists. As Richard Epstein writes:

“Title II was passed when memories were still fresh of the many indignities that had been inflicted on African American citizens on a routine basis. It took little imagination to understand that something was deeply wrong with a nation in which it was difficult, if not impossible, for African American citizens to secure food, transportation, and lodging when traveling from place to place in large sections of the country. In some instances, no such facilities were available, and in other cases they were only available on limited and unequal terms.” [46. Richard A. Epstein, Public Accommodations Under the Civil Rights Act of 1964: Why Freedom of Association Counts as a Human Right, 66 Stan. L. Rev. 1241, 1242 (2014).]

The paradigmatic act of discrimination, in other words, was intentional, overt, and explicitly racial.

Today, however, we are heading toward a world in which this paradigm is apt to turn on its head. Gone will be the days of racially explicit denials of service such as the well-documented phenomena of “hailing a cab while black,” “dining while black,” “driving while black,” or “shopping while black.” [47. See, e.g., Matthew Yglesias, Uber and Taxi Racism, Slate (Nov. 28, 2012), http://www.slate.com/blogs/moneybox/2012/11/28/uber_makes_cabbing_while_black_easier.html; Danielle Dirks & Stephen K. Rice, in Race and Ethnicity: Across Time, Space, and Discipline 259 (Rodney Coates ed., 2004).] But as an increasing body of evidence suggests, inequality will not simply disappear as a consequence. Rather, discrimination will go digital. And when it does occur, it will likely manifest not as a discrete act of individual intent but instead as a statistically disparate effect.

With this future in view, forecasting the consequences for Title II requires little speculation. Absent the ability to bring statistically-based claims against tomorrow’s data-driven establishments, Title II could be rendered irrelevant. [48. In effect, this means that the greatest threat to the statute may not be the doctrinal uncertainty posed by “platform economy businesses,” per se. Instead, it could be the algorithmic “architecture” that drives such companies, regardless of whether they adopt a “platform” business model.]

If America is to deliver on its guarantee of equal access to public accommodations, its civil rights laws must reach the data-driven delivery services, transportation providers, and logistics operators that increasingly move our society. [49. No matter one’s ideological view, the dismantling of legislation through mere technological obsolescence would be a troubling outcome.] Failing to do so simply because these business models were not the norm at the time of the statute’s passage could lead to tragic results. As Oliver Wendell Holmes, Jr. wrote more than a century ago:

“It is revolting to have no better reason for a rule of law than that it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past.” [50. See Holmes, supra note 1, at 469.]

To save one of our antidiscrimination canon’s most iconic statutes from such a fate, all signs now suggest it will need a doctrinal update. Title II, in software parlance, must become Title 2.0.

III.       A Policy Roadmap for Title 2.0

With the foregoing analysis in our rearview mirror, it is now possible to explore the road ahead. The policy challenges of applying Title II to a data-driven society appear to be at least threefold. Policymakers should establish: (1) whether Title II cognizes statistically-based claims; (2) what modern entities are covered by Title II; and (3) what oversight mechanisms are necessary to detect discrimination by such entities. The following sections discuss these three challenges, as well as the steps policymakers can take to address them through judicial, legislative, or regulatory reform.

A.         Statistically-based Claims in a Data-Driven Society

The first, and most obvious, policy reform entails simply clarifying Title II’s cognizance of statistically based claims. Such clarification could come at the judicial or regulatory level, as occurred with the FHA. Or it could come at the legislative level, as occurred with Title VII.

Though the question of whether litigants can sustain statistical claims under Title II may seem like an all-or-nothing proposition, recent experience shows this isn’t actually true. Short of directly translating Title VII theories to Title II, there exist numerous alternatives. Justice Kennedy himself noted as much in Inclusive Communities when he remarked that “the Title VII framework may not transfer exactly to [all other] context[s].” [51. See Inclusive Communities, supra note 42.]

Nancy Leong and Aaron Belzer convincingly argue that one framing might involve adopting a modern take on discriminatory intent claims. The scholars assert that even if intent is deemed essential under Title II, statistically based claims could nevertheless satisfy the requirement. [52. See Leong & Belzer, supra note 14, at 1313.] In their telling, the intent requirement could manifest through a company’s “decision to continue using a platform design or rating system despite having compelling evidence that the system results in racially disparate treatment of customers.” [53. See id.] Under this view, the claim would then be distinguishable from unintentional claims because “once the aggregated data is known to reflect bias and result in discrimination,” its continued use would constitute evidence of intent. [54. See id. Indeed, this argument may become especially compelling in a world where improved digital analytics enable much more customized targeting of individuals or traits. With more fine-grained control over data-driven algorithms, it may become much more difficult to justify the use of those that appear to perpetuate bias against protected groups.]

Not only would this approach heed Kennedy’s admonition in Inclusive Communities “that disparate-impact liability [be] properly limited,” [55. See Inclusive Communities, supra note 42.] it may also offer an elegant means of addressing the concerns raised by dissenting opinions that Title II claims demonstrate a defendant’s discriminatory “intent.” [56. See, e.g., id. (Justice Alito’s dissent highlighted Title II’s “because of” language).] Policymakers should, therefore, take this line of analysis into consideration when clarifying Title II’s scope.

B.         Public Accommodations in a Data-Driven Society

Although this essay has thus far presumed that large-scale algorithmic transportation services like Uber and Amazon are covered by Title II, even that conclusion remains unclear. As enacted, Title II is actually silent as to whether it covers conventional cabs, much less emerging algorithmic transportation models. [57. See, e.g., Bryan Casey, Uber’s Dilemma: How the ADA Could End the On Demand Economy, 12 U. Mass. L. Rev. 124, 134 (citing Ramos v. Uber Techs., Inc., No. SA-14-CA-502-XR, 2015 WL 758087, at *11 (W.D. Tex. Feb. 20, 2015)).] A second policy reform, therefore, would entail clarifying whether Title II actually covers such entities in the first place.

Here, understanding the origins of the Civil Rights Act of 1964 is again useful. The statute lists several examples of public accommodations that were typical of America circa 1960. [58. Civil Rights Act of 1964, tit. II, 42 U.S.C. § 2000a(b) (2018).] Some courts have suggested that this list is more or less exhaustive. [59. See Leong & Belzer, supra note 14, at 1296.] But that view is inconsistent with the law’s own language. [60. Civil Rights Act of 1964, tit. II, 42 U.S.C. § 2000a(a) (2018) (prohibiting discrimination in “establishment[s] affecting interstate commerce”).] And numerous others have taken a broader view of the term “public accommodations,” which extends to entities that were not necessarily foreseen by the statute’s original drafters. [61. See, e.g., Miller v. Amusement Enters., Inc., 394 F.2d 342, 349 (5th Cir. 1968) (“Title II of the Civil Rights Act is to be liberally construed and broadly read.”).]

Policymakers in search of analogous interpretations of public accommodations laws need look no further than the Americans With Disabilities Act (ADA). Like Title II, the ADA covers places of public accommodation. And, again like Title II, its drafters listed specific entities as examples—all of which were the types of brick-and-mortar establishments characteristic of the time. But in the decades since its passage, the ADA’s definition has managed to keep pace with our increasingly digital world. Multiple courts have extended the statute’s reach to distinctly digital establishments, including popular websites and video streaming providers. [62. See Nat’l Ass’n of the Deaf v. Netflix, Inc., 869 F. Supp. 2d 196, 200-02 (D. Mass. 2012) (holding the video streaming service constitutes a “public accommodation” even if it lacks a physical nexus); National Federation of the Blind v. Scribd Inc., 97 F. Supp. 3d 565, 576 (D. Vt. 2015) (holding that an online repository constitutes a “public accommodation” for the purpose of the ADA). But see Tara E. Thompson, Comment, Locating Discrimination: Interactive Web Sites as Public Accommodations Under Title II of the Civil Rights Act, 2002 U. Chi. Legal F. 409, 412 (“The courts, however, have not reached a consensus as to under what circumstances ‘non-physical’ establishments can be Title II public accommodations.”); Noah v. AOL Time Warner Inc., 261 F. Supp. 2d 532, 543-44 (E.D. Va. 2003) (holding that online chatroom was not a “public accommodation” under Title II).]

Policymakers should note, however, that Uber and Lyft have fiercely resisted categorization as public accommodations. [63. See Casey, supra note 57. The Department of Justice and numerous courts have expressed skepticism of this view. But, to date, there has been no definitive answer to this question—due in part to the tendency of lawsuits against Uber and Lyft to settle in advance of formal rulings.] In response to numerous suits filed against them, the companies have insisted they are merely “platforms” or “marketplaces” connecting sellers and buyers of particular services. [64. See id.] As recently as 2015, this defense was at least plausible. And numerous scholars have discussed the doctrinal challenges of applying antidiscrimination laws to these types of businesses. [65. See generally id.; Leong & Belzer, supra note 14.] But increasingly, companies like Uber, Lyft, and Amazon are shifting away from passive “platform” or “marketplace” models into more active service provider roles. [66. See Bryan Casey, A Loophole Large Enough to Drive an Autonomous Vehicle Through: The ADA’s “New Van” Provision and the Future of Access to Transportation, Stan. L. Rev. Online (Dec. 2016), https://www.stanfordlawreview.org/online/loophole-large-enough/ (describing Uber’s and Lyft’s efforts to deploy autonomous taxi fleets). Other platform companies in different sectors are acting similarly. See, e.g., Katie Burke, Airbnb Proposes New Perk For Hosts: A Stake in The Company, San Francisco Bus. Times (Sept. 21, 2018), https://www.bizjournals.com/sanfrancisco/news/2018/09/21/airbnb-hosts-ipo-sec-equity.html.] All three, for example, now deploy transportation services directly. And a slew of similarly situated companies appear poised to replicate this model. [67. See Casey, supra note 66 (noting the ambitions of Tesla, Google, and a host of others to deploy similar autonomous taxi models).] For most such companies, passive descriptors like “platform” or “marketplace” are no longer applicable. Our laws should categorize them accordingly.

C.         Oversight in a Data-Driven Society

Finally, regulators should consider implementing oversight mechanisms that allow third parties to engage with the data necessary to measure and detect discrimination. In an era of big data and even bigger trade secrets, this is of paramount importance. Because companies retain almost exclusive control over their proprietary software and its resultant data, barriers to accessing the information necessary even to detect algorithmic impacts often can be insurmountable. And the ensuing asymmetries can render discrimination or bias effectively invisible to outsiders.

Another benefit of oversight mechanisms is their ability to promote good corporate governance without the overhead of more intrusive command-and-control regulations. Alongside transparency, after all, comes the potential for extralegal forces such as ethical consumerism, corporate social responsibility, perception bias, and reputational costs to play meaningful roles in checking potentially negative behaviors. [68. See Bryan Casey, Amoral Machines; Or, How Roboticists Can Learn to Stop Worrying and Love the Law, 111 Nw. U. L. Rev. Online, at 1358. There was, for example, a happy ending to the recent revelations regarding racial disparities in Amazon delivery services. See Spencer Soper, Amazon to Fill All Racial Gaps in Same-Day Delivery Service, Bloomberg (May 6, 2016), https://www.bloomberg.com/news/articles/2016-05-06/amazon-to-fill-racial-gaps-in-same-day-delivery-after-complaints.] By pricing externalities through the threat of public or regulatory backlash, these and other market forces can help to regulate sectors undergoing periods of rapid disruption with less risk of chilling innovation than traditional regulation. [69. As importantly, this encourages proactive antidiscrimination efforts as opposed to retroactive ones. See Mark Lemley & Bryan Casey, Remedies for Robots, U. Chi. L. Rev. (forthcoming 2019). Without meaningful oversight, the primary risk is not that industry will intentionally build discriminatory systems but that “[biased] effects [will] simply happen, without public understanding or deliberation, led by technology companies and governments that are yet to understand the broader implications of their technologies once they are released into complex social systems.” See Alex Campolo et al., AI Now 2017 Report (2017).]

Some scholars have proposed federal reforms—akin to those put forward by the Equal Employment Opportunity Commission, [70. 29 C.F.R. § 1602.7 (1991).] the Department of Housing and Urban Development, [71. 24 C.F.R. §§ 1.6, 1.8 (1996).] and the Department of Education—as a means of implementing oversight mechanisms for Title II. But state-level action, in this instance, may be more effective. A multi-fronted push that is national in scope provides a higher likelihood of successful reform. And much like the “Brussels Effect” documented at an international level, intra-territorial policies imposed on inter-territorial entities can have extra-territorial effects within the U.S. As the saying goes: “As goes California, so goes the nation.”

As a parting note, it cannot be stressed enough that mere “disclosure” mechanisms are not necessarily sufficient. For oversight to be meaningful, it must be actionable—or, in Deirdre Mulligan’s phrasing, “contestable.” That is, it must allow downstream users to “contest[] what the ideal really is.” Moreover, if oversight is to be accomplished through specific administrative bodies, policymakers must ensure that those bodies have the technical know-how and financial resources available to promote public accountability, transparency, and stakeholder participation. Numerous scholars have explored these concerns at length, and regulators would do well to consider their insights.

Conclusion

Following any major technological disruption, scholars, industry leaders, and policymakers must consider the challenges it poses to our existing systems of governance. Will the technology mesh with those systems? Must our policies change?

Algorithmic transportation is no exception. This piece examines its implications for one of America’s most iconic statutes: Title II of the Civil Rights Act of 1964. As algorithms expand into a vast array of transportation contexts, they will increasingly test the doctrinal foundations of this canonical law. And without meaningful intervention, Title II could soon find itself at risk of irrelevance.

But unlike policy responses to technological breakthroughs of the past, the responses we have seen so far offer genuine hope of timely reform. As Ryan Calo notes, unlike a host of other transformative technologies that escaped policymakers’ attention until too late, this new breed “has managed to capture [their] attention early [] in its life-cycle.”

Can this attention be channeled in directions that ensure that our most important civil rights laws keep pace with innovation? That question, it now appears, should be at the forefront of our policy agenda.


Legal Fellow, Center for Automotive Research at Stanford (CARS); Affiliate Scholar of CodeX: The Center for Legal Informatics at Stanford and the Stanford Machine Learning Group. The author particularly thanks Chris Gerdes, Stephen Zoepf, Rabia Belt, and the Center for Automotive Research at Stanford (CARS) for their generous support.

The common story of automated vehicle safety is that, by eliminating human error from the driving equation, cars will act more predictably, fewer crashes will occur, and lives will be saved. That future remains uncertain, though. Questions linger about whether CAVs will truly be safer drivers than humans in practice, and for whom they will be safer. In the remainder of this post, I will address this “for whom” question.

A recent study by Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern at Georgia Tech found that state-of-the-art object detection systems – the type used in autonomous vehicles – demonstrate higher error rates when detecting darker-skinned pedestrians as compared to lighter-skinned pedestrians. Even after controlling for things like time of day and obstructed views, the technology was five percentage points less accurate at detecting people with darker skin tones.
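
Measuring that kind of gap is conceptually simple: label pedestrians by skin-tone group (the study used the Fitzpatrick scale) and compare detection rates across groups. Here is a toy sketch, with invented data standing in for real detector output:

```python
# Toy sketch of a subgroup disparity check for a pedestrian detector.
# The records below are invented; a real evaluation would use thousands of
# labeled images and control for time of day, occlusion, and so on.

labeled_results = [
    # (skin_tone_group, pedestrian_was_detected)
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", True), ("darker", False), ("darker", False),
]

def detection_rate(group: str) -> float:
    outcomes = [hit for g, hit in labeled_results if g == group]
    return sum(outcomes) / len(outcomes)

gap = detection_rate("lighter") - detection_rate("darker")
print(f"lighter: {detection_rate('lighter'):.0%}, "
      f"darker: {detection_rate('darker'):.0%}, gap: {gap:.0%}")
# The Georgia Tech study reports roughly a five-percentage-point gap; this
# toy data exaggerates it for readability.
```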

The Georgia Tech study is far from the first report of algorithmic bias. In 2015, Google found itself at the center of controversy when its algorithm for Google Photos incorrectly classified some black people as gorillas. More than two years later, Google’s temporary fix of removing the label “gorilla” from the program entirely was still in place. The company says it is working on a long-term fix to its image recognition software. However, the continued presence of the temporary solution several years after the initial firestorm is some indication either of the difficulty of achieving a real solution or of the lack of any serious coordinated response across the tech industry.

Algorithmic bias is a serious problem that must be tackled with a serious investment of resources across the industry. In the case of autonomous vehicles, the problem could be literally life and death. The potential for bias in automated systems demands answers to serious moral and legal questions. If a car is safer overall, but more likely to run over a black or brown pedestrian than a white one, should that car be allowed on the road? What is the safety baseline against which such a vehicle should be judged? Is the standard “the AV should be equally (and hopefully not very) likely to hit any given pedestrian”? Or is it “the AV should hit any given pedestrian less often than a human-driven vehicle would”? Given our knowledge of algorithmic bias, should an automaker be exposed to greater damages when its vehicle hits a black or brown pedestrian than when it hits a white pedestrian? Do tort law claims, like design defect or negligence, provide adequate incentive for automakers to address algorithmic bias in their systems? Or should the government set up a uniform system of regulation and testing around the detection of algorithmic bias in autonomous vehicles and other advanced, potentially dangerous technologies?

These are questions that I cannot answer today. But as the Georgia Tech study and the Google Photos scandal demonstrate, they are questions that the AV industry, government, and society as a whole will need to address in the coming years.

In the coming decades, advancing technology is likely to strain many tried-and-true legal concepts. The tort law cause of action for design defects is likely to be among the most affected. This post will explore the current understanding of design defect claims and highlight areas where autonomous vehicles and other highly complex technologies will likely force a rethinking of the doctrine.

As outlined in the Third Restatement of Torts, design defect claims can be brought against a manufacturer when “the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design . . . and the omission of the alternative design renders the product not reasonably safe.” Essentially, plaintiffs who bring a design defect claim after being harmed by a product bear the burden of showing that the product was designed unreasonably for its intended use, and that an alternative design would have been safer for the user and reasonable for the manufacturer to adopt.

Traditionally, courts have adopted one of two tests to determine the reasonableness of a product design under these claims. Under the consumer expectations test, the key question is whether a product performed up to the level at which an ordinary consumer would expect. Because this test is based on the expectations of an ordinary consumer (one who is presumed to not have any special knowledge about the product), a claim can be successful without any expert testimony about the alleged design failure, as in McCabe v. American Honda Motor Corp.

Alternatively, many courts have adopted the risk-utility test for proving a design defect. The risk-utility test is akin to cost-benefit balancing. The South Carolina Supreme Court in Branham v. Ford Motor Co. noted that the risk-utility test balances “numerous factors . . . including the usefulness and desirability of the product, the cost involved for added safety, the likelihood and potential seriousness of injury, and the obviousness of danger.” Because this test requires evidence on the cost of the current design and of any proposed alternatives, it demands expert testimony and specialized knowledge, unlike the consumer expectations test.
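One classical way to make this balancing explicit is the Learned Hand formula from negligence law, which courts and commentators sometimes analogize to the risk-utility test. The formulation below is offered only as an illustration of the underlying logic, not as the Branham court’s own test:

```latex
% Hand-style balancing as an illustration of risk-utility logic.
% B: burden (cost) of adopting the safer alternative design
% P: probability of injury under the current design
% L: magnitude of the loss if the injury occurs
\[
  B < P \cdot L
  \;\Longrightarrow\;
  \text{omitting the alternative design is unreasonable}
\]
```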

Some have made the case that the consumer expectations test will be inadequate to address claims of design defect in complex technologies such as autonomous vehicles. After all, the argument goes, how could an ordinary consumer possibly have a realistic expectation of how an autonomous vehicle is supposed to perform in a given situation? Any given action by an AV is the result of a series of algorithms that are constantly updated as the car gathers new information about the world around it. Should a consumer expect the AV to act just as a human would? Should it be more cautious? Or perhaps even take actions that would seem overly risky for a human driver, because the AV system has already modeled every step of the maneuver? How could a human passenger know? If courts are persuaded by these concerns, they will likely need to address them by adopting the more expert-reliant risk-utility test.

On the other hand, some scholars argue that the consumer expectations test is perfectly adequate to handle claims involving advanced technology such as AVs. In a recent article, NYU Law Professor Mark Geistfeld notes that consumers need not understand the intricacies of how a technology works in order to have “well-formed expectations of the product performance.” Under Geistfeld’s approach, a consumer either should have such a well-formed expectation or, in the case where they have yet to develop one, should be warned by the manufacturer or dealer in such a way as to make them aware of the risk they are taking on.

It remains to be seen how design defect claims will fare as autonomous vehicles come on the scene. Like many areas of law, though, this is a field that will be stressed, and potentially forced to evolve, by the advent of this revolutionary technology.

By David Redl

Cite as: David Redl, The Airwaves Meet the Highways, 2019 J. L. & Mob. 32.

I applaud and congratulate the University of Michigan for launching the Journal of Law and Mobility. The timing is perfect. The information superhighway is no longer just a clever metaphor. We are living in an era where internet connectivity is a critical part of making transportation safer and more convenient.

Internet connectivity has powered the U.S. and global economies for years now. In the early stages, dial-up connections enabled users to access a vast store of digital information. As the internet and its usage grew, so did the demand for faster broadband speeds. Finally, wireless networks untethered the power of broadband internet so consumers could have fast access when and where they want it.

We are now seeing technology advances in the automotive sector begin to better align with what has occurred in the communications space. The possibilities for what this means for human mobility are truly exciting. Challenges abound, however, with questions around the security and safety of self-driving vehicles and how to create the infrastructure and policies needed for vehicle connectivity. While many of these questions will be sorted out by the market, policy levers will also play a role.

In the late 1990s, the Federal Communications Commission (FCC) agreed to set aside radio frequencies for intelligent transportation systems (ITS), persuaded that emerging advances in communications technologies could be deployed in vehicles to increase safety and help save lives.[72] Specifically, the FCC allocated the 75 megahertz of spectrum between 5850 and 5925 MHz (the 5.9 GHz band) for ITS.[73] The automobile industry’s technological solution was to rely primarily on a reconfiguration of IEEE Wi-Fi standards[74] suitable for ITS (802.11p) so vehicles could “talk” to one another and to roadside infrastructure.[75] The FCC in turn incorporated the Dedicated Short Range Communications (DSRC) standards into its service rules for the 5.9 GHz band.[76]

[72] Amendment of Parts 2 and 90 of the Commission’s Rules to Allocate the 5.850-5.925 GHz Band to the Mobile Service for Dedicated Short Range Communications of Intelligent Transportation Services, Report and Order, 14 FCC Rcd. 18221 (Oct. 22, 1999).
[73] Id.
[74] The Working Group for WLAN Standards, IEEE 802.11 Wireless Local Area Networks, http://www.ieee802.org/11/ (last visited Oct. 31, 2018).
[75] Accepted nomenclature for these communications includes vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and, more generally, vehicle-to-x (V2X). Other applications include vehicle-to-pedestrian.
[76] Amendment of the Commission’s Rules Regarding Dedicated Short-Range Communication Services in the 5.850-5.925 GHz Band (5.9 GHz Band), 19 FCC Rcd. 2458 (Feb. 10, 2004).
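For readers unfamiliar with the band plan, a short sketch of the arithmetic may help. The channel layout below follows the commonly cited US DSRC plan (seven 10 MHz channels above a 5 MHz guard band, with channel 178 as the control channel); treat the details as an illustrative assumption rather than a regulatory reference.

```python
# Back-of-the-envelope view of the 5.9 GHz DSRC allocation.
# Channel roles follow the commonly cited US plan; illustrative only.

BAND_START_MHZ = 5850
BAND_END_MHZ = 5925
print(f"Total allocation: {BAND_END_MHZ - BAND_START_MHZ} MHz")  # 75 MHz

# IEEE 802.11 numbering at 5 GHz: center frequency (MHz) = 5000 + 5 * channel.
CHANNELS = {172: "service", 174: "service", 176: "service",
            178: "control", 180: "service", 182: "service", 184: "service"}

for ch, role in sorted(CHANNELS.items()):
    center = 5000 + 5 * ch
    print(f"Channel {ch}: {center - 5}-{center + 5} MHz ({role})")
```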

The National Telecommunications and Information Administration (NTIA), by statute, is the principal advisor to the President of the United States on information and communications policies, including for the use of radiofrequency spectrum. NTIA is also responsible for managing spectrum use by federal government entities. As such, NTIA seeks to ensure that our national use of spectrum is efficient and effective. Over the past two decades, innovations in wireless technologies and bandwidth capacity have completely changed what is possible in connected vehicle technology. 2G wireless evolved to 3G, and then 4G LTE changed the game for mobile broadband. 5G is in the early stages of deployment. Meanwhile, Wi-Fi has exploded not only in usage but in capability and performance. Many vehicles on the market today are equipped with wireless connectivity for diagnostic, navigation, and entertainment purposes. Yet DSRC as a technology remains largely unchanged, notwithstanding recent pledges from proponents to update the standard.[77] This stasis persists despite the technological leaps of advanced driver assistance systems, enhanced by innovations in vehicular radars, sensors, and cameras.

[77] See IEEE Announces Formation of Two New IEEE 802.11 Study Groups, IEEE Standards Association (June 5, 2018), https://standards.ieee.org/news/2018/ieee_802-11_study_groups.html.

This situation is not novel, as traditional industries continue to grapple with the pace of technological change in the wireless sector. In fact, the automotive sector has faced the challenge of wireless technological change before, struggling to adapt to the sunset of the first generation of analog wireless networks. This leads to the question of whether, as some promise, DSRC effectively broadens a vehicle’s situational awareness beyond line-of-sight as the industry creeps toward autonomous driving, or whether innovation has simply left DSRC behind. The answer matters for whether it makes sense to continue with DSRC for V2X communications. Regardless of how the question is answered, we must address who should answer it.

One distinction between V2X communications for safety applications and most other communications standards choices is that a fragmented market could have drastic consequences for effectiveness, given that vehicles must be able to talk to each other in real time for the entire system to work. This is why the National Highway Traffic Safety Administration (NHTSA) initially proposed a phased-in mandate of DSRC, beginning with cars and light trucks.[78]

[78] See Federal Motor Vehicle Safety Standards; V2V Communications, 82 Fed. Reg. 3854 (Jan. 12, 2017).

This question of whether to mandate DSRC has also been complicated by the inclusion in 3GPP standards of a cellular solution (C-V2X), first in Release 14 for 4G/LTE,[79] and continuing with Release 15 and especially Release 16 for 5G, targeted for completion in December 2019.[80] This raises the legitimate question of whether leveraging the rapid innovation and evolution of commercial wireless technology is the right way to ensure that automotive safety technology keeps pace with technological change, and what role the federal government should play in answering these questions.

[79] Dino Flore, Initial Cellular V2X Standard Complete, 3GPP: A Global Initiative (Sept. 26, 2016), http://www.3gpp.org/news-events/3gpp-news/1798-v2x_r14. The updates to the existing cellular standard are to a device-to-device communications interface known as PC5, the sidelink at the physical layer, for vehicular use cases addressing high-speed and high-density scenarios. A dedicated band is used only for V2V communications.
[80] Release 16, 3GPP: A Global Initiative (July 16, 2018), https://www.3gpp.org/release-16.

Despite the federal government’s legitimate interest in vehicle safety, here, as in most cases, I question whether the federal government should substitute its judgment for that of the market. A possible solution that strikes a balance between legitimate safety needs and technological flexibility is federal performance requirements that maintain technological neutrality.

Moreover, because the spectrum environment has changed drastically since the 1990s, many are questioning whether protecting this 75 megahertz of mid-band spectrum for ITS use is prudent. The 5.9 GHz band is adjacent to spectrum used for Wi-Fi, which makes it unsurprising that some are calling for access to the 5.9 GHz spectrum as a Wi-Fi expansion band. Others question whether V2V safety communications require protected access to all 75 megahertz. NTIA, the FCC, and the Department of Transportation continue to study whether and how this band might be shared between V2V and Wi-Fi or other unlicensed uses, and remain committed to both the goal of increased vehicle safety and the goal of maximum spectrum efficiency.

While I am optimistic that wireless technologies will bring a new level of safety to America’s roadways, a number of other policy and legal issues, including user privacy and cybersecurity, will persist as challenges even as current solutions attempt to address them. If we are to see widespread adoption of and reliance on V2X safety applications, and realize the systemic improvements in safety they portend, Americans must be able to trust in the security and reliability of these technologies.

The marriage of communications technology with transportation will help define the 21st century, and potentially produce enormous benefits for consumers. A lot of work remains, however, to ensure we have the right laws, regulations and policy frameworks in place to allow private sector innovation to flourish. This forum can play an important role in moving the dialogue forward.


David Redl is the Assistant Secretary for Communications and Information at the U.S. Department of Commerce, and Administrator of the National Telecommunications and Information Administration.

On February 20, 2019, the European Parliament, the deliberative institution of the European Union which also acts as a legislator in certain circumstances, approved the European Commission’s proposal for a new Regulation on motor vehicle safety. The proposal now moves to the next step of the EU legislative process; once enacted, an EU Regulation is directly applicable in the law of the 28 (soon to be 27) member states.

This regulation is noteworthy in that it means to pave the way for Level 3 and Level 4 vehicles by obligating car makers to integrate certain “advanced safety features” into their new cars, such as driver attention warnings, emergency braking, and lane-departure warning systems. Since many of us are familiar with such features, which are already found in many recent cars, one may wonder how mandating them would facilitate the deployment of Level 3 or even Level 4 cars. The intention of the European legislator is not immediately obvious, but a more careful reading of the legislative proposal reveals that the aim goes well beyond the safety features themselves: “mandating advanced safety features for vehicles . . .  will help the drivers to gradually get accustomed to the new features and will enhance public trust and acceptance in the transition toward autonomous driving.” Looking further at the proposal reveals that another concern is the changing mobility landscape in general, with “more cyclists and pedestrians [and] an aging society.” Against this backdrop, there is a perceived need for legislation, as road safety metrics have at best stalled, and are even declining in certain parts of Europe.

In addition, Advanced Emergency Braking (AEB) systems have been trending at the transnational level in these early months of 2019. The World Forum for Harmonization of Vehicle Regulations (known as WP.29) recently put forward a draft resolution on such systems, with a view to standardizing them and making them mandatory for WP.29 members, which include most Eurasian countries along with a handful of Asia-Pacific and African countries. While the World Forum is hosted by the United Nations Economic Commission for Europe (UNECE), a regional commission of the Economic and Social Council (ECOSOC) of the UN, it notably does not count among its members certain UNECE member states, such as the United States and Canada, which have so far declined to take part in the World Forum. To be sure, the North American absence (along with that of China and India, for example) is not new; these countries have never taken part in the World Forum’s work since it began operations in 1958. While the small yellow front corner lights one sees on US cars are not something you will ever see on any car circulating on the roads of a WP.29 member state, one may wonder whether the level of complexity involved in designing CAV systems will not forcibly push OEMs toward harmonization; it is one thing to live with having to manufacture different types of vehicle lights, and quite another to design and manufacture different CAV systems for different parts of the world.

Yet it is well known that certain North American regulators are not big fans of such an approach. In 2016, the US DoT proudly announced an industry commitment by almost all car makers to implement AEB systems in their cars, with the only requirement being that such systems satisfy set safety objectives. While it seems everyone would agree that limited aims are sometimes the best way to get closer to the ultimate, bigger goal, regulating styles vary. In the end, one must face the fact that by 2020, AEB systems will be harmonized for a substantial part of the global car market, and may even be so in a de facto manner in North America. And given that the World Forum has received a clear mandate from the EU – renewed as recently as May 2018 – to develop a global and comprehensive CAV standard, the North American and Asian governments that have so far declined to join WP.29 may only be losing an opportunity to influence the outcome of such CAV standards by sticking to their guns.