Vehicle Behavior

The California DMV recently released several 2019 reports from companies piloting self-driving vehicles in California. Under state law, all companies actively testing autonomous vehicles on California public roads must disclose the number of miles driven and how often human drivers were required to retake control from the autonomous vehicle. Retaking control is known as “disengagement.” The DMV defines disengagements as:

“[D]eactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”

Because of the proprietary nature of autonomous vehicle testing, companies rarely release data publicly; these reports are one of the few windows into their progress. The 60 companies actively testing in California cumulatively traveled 2.88 million miles in 2019. The table below reports the figures for some of the major testers in California.

Company | Vehicles Active in CA | Miles Driven in 2019 | Engagements | Engagements per 1,000 Miles | Average Miles Between Engagements
Waymo | 153 | 1.45 million | 110 | 0.076 | 13,219
GM Cruise | 233 | 831,040 | 68 | 0.082 | 12,221
Apple | 66 | 7,544 | 64 | 8.48 | 118
Lyft | 20 | 42,930 | 1,667 | 38.83 | 26
Aurora | ? | 13,429 | 142 | 10.57 | 95
Nuro | 33 | 68,762 | 34 | 0.494 | 2,024
Pony.ai | 22 | 174,845 | 27 | 0.154 | 6,493
Baidu | 4 | 108,300 | 6 | 0.055 | 18,181
Tesla | 0 | 0 | 0 | 0 | 0
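The two derived columns are simple ratios of the reported miles and engagement counts. A minimal sketch in Python, using a few of the reported figures (small discrepancies with the table come from rounding in the reported mileage):

```python
# Derived disengagement metrics from reported 2019 miles and counts.
reports = {
    "Waymo": (1_450_000, 110),
    "GM Cruise": (831_040, 68),
    "Apple": (7_544, 64),
}

for company, (miles, events) in reports.items():
    per_1k = events / miles * 1_000    # engagements per 1,000 miles
    between = miles / events           # average miles between engagements
    print(f"{company}: {per_1k:.3f} per 1,000 mi, {between:,.0f} mi between")
```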

What these numbers make clear is that several contenders have made significant progress in the autonomous vehicle space, while others are not yet so competitive. Companies like Waymo, GM Cruise, and Baidu (which also tests extensively in China) have made incredible progress in decreasing the frequency at which a driver must take over from an automated vehicle. Others, like Apple, Lyft, and Aurora, while making progress, are nowhere near as sophisticated at avoiding engagements yet. Notably, Tesla, the manufacturer frequently in the news for its “Autopilot” feature, does not test on public roads in California. The company says it conducts tests via simulation, on private test tracks, on public roads around the world, and through “shadow-testing,” collecting anonymized data from its customers during normal driving operations.

What these numbers seem to illustrate is that the autonomous vehicle industry is not all on par, as many believe. It is often said that Henry Ford did not conceive the idea of the automobile; he perfected it. Similarly, companies like Waymo or GM may be the first to perfect autonomous vehicles, and they stand to gain an incredible market advantage once they do. They are striving to be the Fords of this space, while others look like they are still manufacturing carriages. However, despite these impressive numbers from a select few, the companies themselves think these metrics “do[] not provide relevant insights” (per Waymo) and that the idea that they give any “meaningful insight . . . is a myth” (per GM Cruise).

Why are the head-and-shoulders leaders on these metrics saying that they provide very little indication of progress on the technology? Disengagement reports may not be the best way for these companies to build trust and credibility in their products. They are transparent only in the sense that they provide some data, with no detail or context.

I was having a conversation about these disengagement numbers with a colleague* this week, and the topic of driver distraction arose. In the California tests, the driver is constantly alert. Once these vehicles are in use by the general public, a notification to take over may not be effective if the driver is distracted. One reason these numbers do not provide particularly useful information is that, for the metrics to be meaningful, at least two things must be true:

  • If the vehicle does not indicate that the driver needs to take over, no technical error has occurred; and
  • The driver is paying attention and can quickly take over when necessary.

In California testing, the drivers behind the wheel are always alert and ready to take over. They may take over when the vehicle indicates they must, because of a malfunction or poor conditions. The driver can also take over when the vehicle has done something incorrectly yet does not indicate that the driver needs to intervene. This could include veering out of its lane or failing to recognize a pedestrian.

One of the allures of autonomous vehicles is that a driver may not need to be 100 percent engaged for the vehicle to function correctly. However, current technology has not yet achieved this result, as reiterated this past week by the National Transportation Safety Board (NTSB). The NTSB is an independent federal agency; it lacks enforcement power, but its recommendations are considered thorough and are taken seriously by policymakers.

The NTSB put forward many findings on Tuesday, February 25th, regarding a Tesla crash that killed a California driver in March 2018. (A synopsis of the NTSB report and findings can be found here.) The crash involved the driver of a Tesla in Autopilot mode, which struck a barrier between the highway and a left exit lane. The NTSB found that the Tesla briefly lost sight of the lines marking the highway lane (because of fading on the highway lines), began to follow the right-most lane marker of the exit lane, and as a result entered the “gore area.” This same action had apparently occurred several times in this exact vehicle, but on previous trips the driver was paying attention and was able to correct the vehicle. This time, the driver was playing a mobile game and did not correct the vehicle, causing the crash. Here is how the NTSB presented three of its findings:

The Tesla’s Autopilot lane-keeping assist system steered the sport utility vehicle to the left into the neutral area of the gore, without providing an alert to the driver, due to limitations of the Tesla Autopilot vision system’s processing software to accurately maintain the appropriate lane of travel. (emphasis added)

The driver did not take corrective action when the Tesla’s Autopilot lane-keeping assist system steered the vehicle into the gore area, nor did he take evasive action to avoid the collision with the crash attenuator, most likely due to distraction by a cell phone game application. (emphasis added)

The Tesla Autopilot system did not provide an effective means of monitoring the driver’s level of engagement with the driving task.

Here we see a failure of both conditions presented above, combined with an inability to adequately monitor driver engagement. The vehicle took an action it assumed to be correct, and thus did not notify the driver to take over. That failure, combined with a driver who was not paying attention and did not notice the need to intervene, resulted in the crash. This tragic accident highlights that the AV industry still has many areas to improve before higher SAE level vehicles are ready for mass adoption. (The ADAS on the Tesla was SAE Level 2.)

As I discussed last week, the federal Department of Transportation has taken a rather hands-off approach to regulation of automated vehicles, preferring to issue guidance rather than mandatory regulations. The NTSB criticized this approach in its Tesla crash findings, writing that there has been “Insufficient Federal Oversight of Partial Driving Automation Systems”:

The US Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) have taken a nonregulatory approach to automated vehicle safety. NHTSA plans to address the safety of partial driving automation systems through enforcement and a surveillance program that identifies safety-related defect trends in design or performance. This strategy must address the risk of foreseeable misuse of automation and include a forward-looking risk analysis.

Because the NTSB lacks enforcement power, it cannot compel industry actors or other government agencies to take any action. It can only perform investigations and make recommendations. NTSB Chairman Robert Sumwalt had much to say regarding distracted driving, the AV industry, and the lack of government regulations in the hearing on Tuesday, February 25th.

“In this crash we saw an over-reliance on technology, we saw distraction, we saw a lack of policy prohibiting cell phone use while driving, and we saw infrastructure failures, which, when combined, led to this tragic loss.”

“Industry keeps implementing technology in such a way that people can get injured or killed . . . [I]f you own a car with partial automation, you do not own a self-driving car. Don’t pretend that you do.”

“This kind of points out two things to me. These semi-autonomous vehicles can lead drivers to be complacent, highly complacent, about their systems. And it also points out that smartphones manipulating them can be so addictive that people aren’t going to put them down.”

Chairman Sumwalt is right to be frustrated. The DOT and NHTSA have not regulated the AV industry, or ADAS, as they should. Tragic accidents like this one can be avoided through a variety of solutions: better monitoring of driver engagement than torque-sensing steering wheels, lock-out functions for cell phones while driving, and stricter regulation of advertising and warnings by companies offering ADAS. Progress is being made in the AV industry, and automated vehicles are getting smarter and safer every day. But incidents like this, which combine failures of technology, regulation, and consumer use, do not instill public confidence in an incredible technology that will benefit society. They only highlight how much farther we still have to go.

*I would like to thank Fiona Mulroe for the inspiration to take this approach to the disengagement report.

The Uniform Law Commission (“ULC”) is a non-governmental body composed of state-selected lawyers who oversee the preparation of “Uniform Laws” to be proposed to the states for adoption. The group’s best-known body of law will be familiar to any lawyer or law student who paid attention in first-year contracts: the Uniform Commercial Code (UCC). Not all projects of the ULC are as successful as the UCC. In fact, many are never adopted by any state.

The ULC appointed a Drafting Committee on Highly Automated Vehicles in 2017.  The Committee recently completed an Automated Vehicles Act, titled “The Uniform Automated Operation of Vehicles Act,” which is a “uniform law covering the deployment of automated driving systems (SAE levels 3 through 5).” The Act is intended to cover a vast array of issues likely to be faced by states in the coming decades as autonomous vehicles become more ubiquitous. The ULC description of the Automated Vehicles Act states:

The Uniform Automated Operation of Vehicles Act regulates important aspects of the operation of automated vehicles.  This act covers the deployment of automated vehicles on roads held open to the public by reconciling automated driving with a typical state motor vehicle code.  Many of the act’s sections – including definitions, driver licensing, vehicle registration, equipment, and rules of the road – correspond to, refer to, and can be incorporated into existing sections of a typical vehicle code.  This act also introduces the concept of automated driving providers (ADPs) as a legal entity that must declare itself to the state and designate the automated vehicles for which it will act as the legal driver when the vehicle is in automated operation.  The ADP might be an automated driving system developer, a vehicle manufacturer, a fleet operator, an insurer, or another kind of market participant that has yet to emerge.  Only an automated vehicle that is associated with an ADP may be registered.  In this way, the Automated Operation of Vehicles Act uses the motor vehicle registration framework that already exists in states – and that applies to both conventional and automated vehicles – to incentivize self-identification by ADPs.  By harnessing an existing framework, the act also seeks to respect and empower state motor vehicle agencies.

The final version of the act can be downloaded here.

This Act is a step in the right direction. It does much of the leg-work for state legislatures to exempt autonomous vehicles from a variety of state laws by providing language which can be easily inserted into various state vehicle codes. States can choose to enact certain parts of the Uniform Act, picking and choosing the sections or phrases they want and discarding the rest. This is beneficial because it will likely mean more states will enact some form of AV exemption. However, it also means there could be substantial variation between states that adopt some but not all of the Act. The passage of a Uniform Act by the ULC does not ensure there will be uniform adoption.

The act is not very long, only 28 pages including all the comments and legislative notes. There are many sections that deserve a more extensive dive, but I want to begin with a subsection that relates to a topic I’ve written about before: platooning. The Act does not include a provision that would legalize platooning, but it does contain a single provision addressing state laws on minimum following distance: Section 9(h). Section 9 covers “Rules of the Road.” Subsection (h) states:

A provision of [this state’s vehicle code] imposing a minimum following distance other than a reasonable and prudent distance does not apply to the automated operation of an automated vehicle.

The comment to the section clarifies subsection (h):

[T]his section provides that a numerical minimal following-distance requirement does not apply to the automated operation of automated vehicles. These numerical minimums may be unnecessarily large for automated vehicles that react faster than human drivers. However, the common “reasonable and prudent” following-distance requirement continues to apply. This bracketed subsection (h) differs in scope from following-distance legislation enacted in some states to facilitate the platooning of vehicles, particularly commercial trucks, that use advanced technologies but may not necessarily qualify as automated vehicles.

As I’ve written about before, platooning vehicles that follow at incredibly close distances could be considered “reasonable and prudent” given the connected nature and quick response times of the technology. If the Uniform Act were adopted in some states, it could present the opportunity to argue that there is, or should be, a “reasonable car” standard applied to autonomous vehicles. The act also solves the problem for states with 300- to 500-foot following-distance requirements for trucks.

The passage of the Act is exciting for many reasons. It shows that the legal world is taking autonomous vehicles seriously, and is taking fundamental steps to create a legal framework within which these vehicles can operate. It also provides a baseline for states to modify their existing laws to allow autonomous vehicles to be exempted from many requirements that need not apply to autonomous vehicles. For example, there is no need for a steering wheel or gas pedals in an AV. There may be a need for a large touchscreen like in the various Tesla models, which would be distracting in traditional vehicles. The Act will hopefully spark discussions about the proper way to regulate autonomous vehicles at the state level, and may even spark debate over the merits of varied state or uniform federal regulation.

One of the most exciting and economically advantageous aspects of autonomous vehicle technology is the ability for cars and heavy trucks to “platoon.” Platooning is a driver-assist technology that allows vehicles to travel in tandem, maintaining a close, constant distance. Imagine the trucks as racers in a bicycle or foot race: by drafting closely behind one another, the vehicles reduce their energy (fuel) consumption.

I personally believe that large-scale platooning should be the ultimate goal of autonomous vehicle technology; the potential time and fuel savings would be enormous if the highways were filled with vehicles drafting behind one another. Imagine a highway system without rubberneckers, without the guy who floors it and then slams on the brakes during rush hour, without the “phantom traffic jam.” Imagine instead an organized “train” of cars and trucks, following at a close but technologically safe distance (between 25 and 75 feet) and at a uniform speed.

This future is more likely to begin on a smaller scale, and in the commercial shipping sector, rather than in the consumer vehicle market. The work has already started with some platooning pilot programs involving heavy trucks.

These programs employ short-range communications technology and advanced driver assistance systems in their testing. The technology creates a seamless interface supporting synchronized actions; however, drivers are still needed to steer and monitor the system. When done with heavy commercial trucks — tractor-trailers, 18-wheelers, or semi-trucks (depending on what area of the country you live in) — the trucks are “coupled” through vehicle-to-vehicle (V2V) communication. The V2V technology allows the vehicles to synchronize acceleration, deceleration, and braking to increase efficiency and safety.
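The synchronization described above can be pictured as a simple feedback loop: the trailing truck receives the leader's speed over V2V and corrects its own speed to hold a target gap. A toy sketch follows; the gain, target gap, and speeds are illustrative assumptions, not any vendor's actual control law:

```python
# Toy cooperative gap-keeping controller: the follower matches the
# leader's V2V-broadcast speed and nudges toward a fixed target gap.
# All parameter values are illustrative assumptions.

TARGET_GAP_FT = 50.0   # desired following distance, feet
GAIN = 0.2             # proportional correction, per second

def follower_speed(leader_speed_fps: float, gap_ft: float) -> float:
    """Speed command for the trailing truck, in feet per second."""
    gap_error = gap_ft - TARGET_GAP_FT
    return leader_speed_fps + GAIN * gap_error

# If the gap has widened to 60 ft, the follower speeds up slightly;
# if it has shrunk to 40 ft, it eases off.
print(follower_speed(88.0, 60.0))  # 90.0
print(follower_speed(88.0, 40.0))  # 86.0
```

A real system layers braking coordination, latency compensation, and fault handling on top of this, but the core idea is the same shared-state feedback loop.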

The economic incentives for platooning in the freight industry derive from the potential fuel savings, which come from reductions in aerodynamic drag. While both vehicles in a platooning pair save fuel, the rear vehicle typically saves significantly more. Tests conducted by the National Renewable Energy Laboratory demonstrated average fuel savings of up to 6.4 percent for a pair of platooning trucks: up to 5.3 percent for the lead truck and up to 9.7 percent for the trailing truck. These numbers varied with the size of the gap between the two trucks and the driving speed. The ability to decrease fuel consumption in heavy freight vehicles represents an enormous opportunity to reduce the cost of shipping.

Fuel costs account for roughly one-third of the trucking industry’s cost per mile; a typical heavy-duty freight vehicle incurs between $70,000 and $125,000 in fuel costs each year. A vehicle that reduces its fuel consumption by 6.4 percent would save roughly $4,500 to $8,000 per year. These savings are potentially enormous when extrapolated across the more than 2 million tractor-trailers on the road. The ability to decrease shipping and transportation costs should be a substantial incentive for large shipping companies like FedEx, UPS, and Amazon.
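The dollar figures follow directly from the percentages. A back-of-the-envelope check in Python, using the fuel-cost range and savings rate quoted above:

```python
# Back-of-the-envelope check of the quoted platooning fuel savings.
annual_fuel_cost = (70_000, 125_000)   # typical heavy-duty truck, $/year
savings_rate = 0.064                   # NREL average platooning savings

low, high = (cost * savings_rate for cost in annual_fuel_cost)
print(f"${low:,.0f} to ${high:,.0f} per year")  # roughly $4,500 to $8,000
```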

While getting the major players in the transportation industry on board is crucial, an estimated 90% of trucking companies in the U.S. operate fleets of six trucks or fewer, and 97% have fewer than 20. Retrofitting existing truck cabs with the necessary technology could pose a substantial hardship for these small businesses. However, it is projected that owner-operators would recoup their investment in 10 months, and fleet operators in 18 months. This relatively short period could incentivize even small-scale operators to invest in the technology.

Platooning technology could also help offset the recent spike in the average cost of truck operations. Most of these cost increases came from driver wages and benefits, likely due to a shortage of long-haul truck drivers. The shortage is only expected to grow; the combination of long hours, inconsistent schedules, long stretches of solitude, and low pay has increased turnover and disincentivized new drivers from entering the labor market. While the technology is not yet poised to run without drivers, a single truck driver could one day lead a platoon of autonomous trucks, decreasing the need for a driver in every cab.

My vision of a highway filled with platooning vehicles may not be feasible yet, but with proper investment by businesses, platooning technology could become viable, and cost-effective, within a few years.

I previously blogged on automated emergency braking (AEB) standardization taking place at the World Forum for Harmonization of Vehicle Regulations (also known as WP.29), a UN working group tasked with managing a few international conventions on the topic, including the 1958 Agreement on wheeled vehicles standards.

It turns out the World Forum recently published the result of a joint effort undertaken by the EU, US, China, and Japan regarding AV safety. Titled Revised Framework document on automated/autonomous vehicles, its purpose is to “provide guidance” regarding “key principles” of AV safety, in addition to setting the agenda for the various subcommittees of the Forum.

One may first wonder what China and the US are doing there, as they are not parties to the 1958 Agreement. It turns out that participation in the World Forum is open to everyone (at the UN), regardless of membership in the Agreement. China and the US are thus given the opportunity to influence the adoption of one standard over another through participation in the Forum and its sub-working groups, without being bound if the outcome is not to their liking in the end. Peachy!

International lawyers know that every word counts, and every word can be assumed to have been negotiated down to the comma. Using that kind of close textual analysis, what stands out in this otherwise terse UN prose? First, the only sentence couched in mandatory terms. Setting out the drafters’ “safety vision,” it goes as follows: AVs “shall not cause any non-tolerable risk, meaning . . . shall not cause any traffic accidents resulting in injury or death that are reasonably foreseeable and preventable.”

This sets the bar very high in terms of AV behavioral standards, markedly higher than for human drivers. We cause plenty of accidents that would be “reasonably foreseeable and preventable.” A large share of accidents is probably the result of human error, distraction, or recklessness, all things “foreseeable” and “preventable.” Nevertheless, we are allowed to drive and are insurable (except in the most egregious cases). Whether this is a good standard for AVs can be debated, but what is certain is that it reflects the general idea that we hold machines to a much higher “standard of behavior” than other humans; we forgive other humans for their mistakes, but machines ought to be perfect, or almost so.

In second position: AVs “should ensure compliance with road traffic regulations.” This is striking in its simplicity, and I suppose that the whole discussion of how the law and its enforcement are actually rather flexible (such as the kind of discussion this very journal hosted last year in Ann Arbor) has not reached Geneva yet. As can be seen in the report on that conference, one cannot just ask AVs to “comply” with the law; there is much more to it.

In third position: AVs “should allow interaction with the other road users (e.g. by means of external human machine interface on operational status of the vehicle, etc.)” Hold on! This was a topic at last year’s Problem-Solving Initiative hosted by the University of Michigan Law School, and we concluded that it was actually a bad idea. Why? First, people need to understand whatever “message” is sent by such an interface, and language may get in the way. Then, the word “interaction” suggests some form of control by the other road user. Think of a hand signal to get the right of way from an AV; living in a college town, it is not difficult to imagine how such “responsive” AVs could wreak havoc in areas with plenty of “other road users,” on their feet or zipping around on scooters. Our conclusion was that the AV could send simple light signals to indicate its systems have “noticed” a crossing pedestrian, for example, without any additional control mechanism being given to the pedestrian. Obviously, jaywalking in front of an AV would still result in the AV braking… and maybe sending angry light signals or honking just like a human driver would.

Finally: cybersecurity and system updates. Oof! The cybersecurity issues of IoT devices are an evergreen source of memes and mockery, windows onto a quirky dystopian future where software updates (or the lack thereof) prevent one from turning the lights on, flushing the toilet, or getting out of the house… or where a botnet of connected wine bottles launches DDoS attacks across the web’s vast expanse. What about a software update while merging onto a crowded highway from an entry ramp? In that regard, the language of those sections seems rather meek, simply citing the need to respect “established” cybersecurity “best practices” and to ensure system updates happen “in a safe and secured way.” I don’t know what cybersecurity best practices are, but looking at the constant stream of IT industry leaders caught in various cybersecurity scandals, I have my doubts. If there is one area where actual standards are badly needed, it is consumer-facing connected objects.

All in all, is this just another useless piece of paper produced by an equally useless international organization? If one is looking for raw power, probably. But there is more to it: the interest of such a document is that it reflects the lowest common denominator among countries with diverging interests. The fact that they agree on something (or maybe nothing) can be a vital piece of information. If I were an OEM or policymaker, it is certainly something I would be monitoring with due care.

“Safety.” A single word that goes hand-in-hand (and rhymes!) with CAV. While much has been said and written about CAV safety already (including on this very blog, here and there), two things are certain: human drivers seem relatively safe when considering the number of fatalities per mile driven, yet there are still too many accidents, and increasingly more of them.

The traditional approach to safely deploying CAVs has been to make them drive: so many miles, with so few accidents and “disengagements,” that the regulator (and the public) would consider them safe enough. Or even safer than us!

Is that the right way? One can question where CAVs are being driven. If all animals were once equal, not every mile is driven equal. All drivers know that a mile on a straight, well-maintained road on a fine sunny day is not the same as a mile driven on the proverbially mediocre Michigan roads during a bout of freezing rain. The economics are clear; investments in AV technology will only turn a profit through mass deployment. Running a few demos and prototypes in Las Vegas won’t cut it; CAVs need to be ready to tackle the diversity of weather patterns found throughout the world, beyond the confines of the US Southwest.

Beyond location, there is the additional question of whether such a “testing” method is the right one in the first place. Many are challenging what appears to be the dominant approach, most recently during this summer’s Automated Vehicle Symposium. Their suggestion: proper comparison and concrete test scenarios. For example, rather than simply aiming for the fewest accidents per thousands of miles driven, one can measure braking performance at 35 mph, in low-visibility and wet conditions, when a pedestrian appears 10 yards in front of the vehicle. In such a scenario, human drivers can meaningfully be compared to software ones. Furthermore, on that basis, all industry players could come together to develop a safety checklist that any CAV must pass before hitting the road.
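The pedestrian scenario can be made concrete with basic kinematics: stopping distance is reaction distance plus braking distance. A sketch comparing an attentive human driver with an automated system under assumed reaction times and deceleration (every number here is an illustrative assumption, not a measured value):

```python
# Stopping distance = reaction distance + braking distance (v^2 / 2a).
# All parameter values are illustrative assumptions.
V_MPS = 35 * 0.44704          # 35 mph in meters per second (~15.6 m/s)
DECEL = 5.0                   # assumed deceleration on a wet road, m/s^2

def stopping_distance(reaction_s: float) -> float:
    """Total distance covered from hazard appearing to full stop."""
    return V_MPS * reaction_s + V_MPS**2 / (2 * DECEL)

human = stopping_distance(1.5)   # typical human reaction time, seconds
robot = stopping_distance(0.3)   # assumed sensor-to-brake latency
print(f"human: {human:.1f} m, automated: {robot:.1f} m")
# A pedestrian appearing 10 yards (~9.1 m) ahead is well inside either
# stopping distance; a scenario test would score the speed at impact or
# the safety margin, not just a pass/fail.
```

The point of such a scenario is that both numbers come from the same repeatable setup, so human and software drivers can be compared on equal terms.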

Developing a coherent (and standardized?) approach to safety testing should be at the top of the agenda, with a looming push in Congress to get the AV bill rolling. While there are indications that the industry might not be expecting much from the federal government, this bill still has the potential to allow CAVs on the road without standardized safety tests, which could have dire consequences for the industry and its risk-seeking members. Not to mention that a high-risk business environment squeezes out players with shallower pockets (and possibly innovation) and puts all road users, especially those without the benefit of a metal rig around them, at physical and financial risk should an accident materialize. Signs of moderation, such as Cruise postponing the launch of its flagship product, allow one to be cautiously hopeful that the “go fast and break things” mentality will not take hold in the automated driving industry.

*Correction 9/9/19 – A correction was made regarding the membership to 1958 Agreement and participation at the World Forum.

All the way back in December, I wrote about how various companies, including Amazon (in partnership with Toyota), Postmates, Domino’s and Kroger were all working on using CAVs and drones to deliver goods to consumers. Since then there have been a number of news stories on similar projects across the globe, which deserve some attention, as you’ll see in this, the first of three posts:

On the Ground

In my December post I talked about Postmates’ testing of delivery robots that could bring products directly to your door. This winter similar ‘bots were deployed on the campuses of the University of the Pacific (sponsored by PepsiCo) and George Mason University (via start-up Starship Technologies and food-services giant Sodexo). College campuses, which tend to feature greater walkability and an always snack-craving populace, seem to be the perfect testing ground for such systems. And the robots seem to have made a difference in eating habits, at least at George Mason, with an additional 1,500 breakfast orders being delivered via robot. This may be due to the fact that the robots were integrated into the campus meal plan, meaning students weren’t just able to order snacks, but could order full meals and pay for them via their meal plan.

While these delivery services may be seen as saviors by hung-over college students in need of a bacon, egg, and cheese sandwich, the expansion of such programs does raise issues. Just as ridesharing has changed the way cities have to manage curb space, delivery ‘bots raise questions of sidewalk management. Just how much public space should we cede to commercial use? How will the ‘bots be programmed to “share the road” with pedestrians? Of course, that may not be as big of an issue in more sprawling American cities that don’t have the same density of foot traffic. The ‘bots will also have to contend with being messed with by humans, as was the case in this video, where a ‘bot’s cameras were intentionally covered in snow (there is a happy ending, as seen in the footage: after a good Samaritan cleaned off the camera, the ‘bot said “thank you!” to its human helper and continued on its way). In an attempt to get ahead of these issues, San Francisco banned sidewalk delivery ‘bots in 2017, and has only slowly opened up room for testing. Will other cities follow suit? Or will they open the floodgates? Currently, the California DMV is considering new rules on delivery ‘bots and car-sized autonomous delivery vehicles, so look for a follow-up blog once those are out.

Given my continued interest in data collection and privacy (an interest echoed in more recent blog posts by Kevin, available here, here, and here), I’d be remiss not to flag those issues here. (Those issues also come up in the context of aerial deliveries, discussed in our next post.) Not only would sidewalk-based delivery ‘bots collect data on the items you order and when, they could potentially collect data about your home or its surrounding environment (think back to when Google was caught collecting wi-fi data with its Street View cars).

In our next post – aerial delivery drones!

With roughly a clip a month – most of them corporate fluff – Waymo’s YouTube channel is not the most exciting or informative one. At least, those (like me) who keep looking for clues about what Waymo is up to should not expect much to come out of there.

That was until February 20th, when Waymo quietly published a 15-second clip of its car in action – the main screen showing a rendering of what the car “sees” and the corner thumbnail showing the view from the dash cam. The key point: Waymo’s car apparently crosses an intersection with broken traffic lights, under police control, without any trouble. Amazing! Should we conclude that Level 5 is at our doorstep?

The car and tech press was quick to spot this one, and reports were mostly praise. Yet Brad Templeton, in his piece for Forbes, points out a few things that the clip does not say. First, there is the fact that Waymo operates in a geographically enclosed area, where the streets, sidewalks, and other hard infrastructure (lights, signs, and probably lane markings) are pre-mapped and already loaded into the system. In other words, Waymo’s car does not discover things as it cruises along the streets of Northern California. Moreover, the street lights here do not work, so technically this is just another four-way stop-signed intersection, with the difference that it is rather busy and there is a police officer directing traffic in the middle. Finally, the car just goes straight, which is by far the easiest option (no left turn, for example…).

Beyond that, what Waymo alleges and wants us to see is that the car “recognizes” the police officer – or, at the very least, recognizes that there is something person-shaped standing in the middle of the intersection making certain gestures at the car – and that the car’s sensors and Waymo’s algorithms are now at the level of being able to understand the hand signals of law enforcement officers.

Now, less than a year ago, I heard the CEO of a major player in the industry assert that such a thing was impossible – in reference to CAVs being able to detect and correctly interpret the hand signals cyclists sometimes use. It seems that a few months later, we’re there. Or are we? One issue which flew more or less under the radar is how exactly the car recognizes the law enforcement officer here. Would a random passerby playing traffic cop have the same effect? If so, is that what we want?

As a member of the “Connected and Automated Vehicles: Preparing for a Mixed Fleet Future” Problem Solving Initiative class held at the University of Michigan Law School last semester, my team and I had the opportunity to think about just that – how to make sure that road interactions stay as close as possible to what they are today, and conversely how to foreclose awkward interactions or possible abuses that “new ways to communicate” would add. Should a simple hand motion be able to “command” a CAV? While such a question cuts across many domains, our perspective was mostly a legal one, and our conclusion was that any new signal that CAV technology enables (from the perspective of pedestrians and other road users) should be non-mandatory and limited to enabling mutual understanding of intentions, without affecting the behavior of the CAV. What we see in this video is the opposite: seemingly, the officer directing traffic is not equipped with special beacons that broadcast some form of “law enforcement” signal, and it is implied – although unconfirmed – that there is no human intervention. We are left awed, maybe. But reassured? Maybe not.

The takeaway may be just this: the issues raised by this video are real ones, and they are issues Waymo, and others, will at some point have to address publicly. Secrecy may be good for business, but only up to a point. Engagement by key industry players is of the highest importance if we want to foster trust and avoid having CAV technology crash-land into our societies.

The “Trolley Problem” has been buzzing around for a while now – so much so that it has become the subject of large empirical studies aimed at finding a solution as close to “our values” as possible and, more casually, the subject of an episode of The Good Place.

Could it be, however, that the trolley problem isn’t one? In a recent article, the EU Observer, an investigative not-for-profit outlet based in Brussels, lashed out at the European Commission for its “tunnel vision” with regard to CAVs and how it seems to embrace the benefits of this technological and social change without an ounce of doubt or skepticism. While there are certainly things to be worried about when it comes to CAV deployment (see previous posts from this very blog by fellow bloggers here and here), the famed trolley might not be one of them.

The trolley problem seeks to illustrate one of the choices that a self-driving algorithm must – allegedly – make. Faced with a situation where every available option kills someone, the trolley problem asks who is to be killed: the young? The old? The pedestrian? The foreigner? Those who put forward the trolley problem usually do so to show that, as humans, we are faced with morally untenable alternatives when coding algorithms, like deciding who is to be saved in an unavoidable crash.

The trolley problem is not a problem, however, because it makes a number of assumptions – too many. The result is a hypothetical scenario which is simple, almost elegant, but mostly just wrong. One such assumption is the rails. Not necessarily the physical ones, like those of actual trolleys, but the ones on which the whole problem is cast. CAVs are not on rails, in any sense of the word, and their algorithms will include the opportunity to go “off-rails” when needed – getting onto the shoulder or the sidewalk, for instance. The rules of the road already incorporate a certain amount of flexibility, and such flexibilities will be built into the algorithms.

Moreover, the very purpose of the constant sensor input processed by the driving algorithm is precisely to avoid putting the CAV in such a situation where the only options that remain are collision or collision.

But what if? What if a collision is truly unavoidable? Even then, it is highly misleading to portray CAV algorithm design as a job where one has to incorporate a piece of code specific to every single decision to be made in the course of driving. The CAV will never be faced with an input of the type in which we all too often present the trolley problem: go left and kill this old woman, go right and kill this baby. The driving algorithm will certainly not understand the situation as one where it would kill someone; it may understand that a collision is imminent and that multiple paths are closed. What would it do, then? Brake, presumably, and steer to try to avoid a collision – like the rest of us would do.
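To make the point concrete, here is a toy sketch – purely illustrative, not any company’s actual planner, with invented maneuver names and numbers – of how an emergency decision is more plausibly structured: the algorithm scores generic candidate trajectories by estimated collision probability and severity, and nowhere is there a branch that classifies potential victims.

```python
# Toy illustration (not a real planner): an emergency maneuver is chosen
# by scoring candidate trajectories, not by deciding "who to kill."
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_probability: float  # estimated from current sensor input
    expected_impact_speed: float  # m/s, if a collision does occur

def expected_harm(m: Maneuver) -> float:
    # Crude severity proxy: harm grows with the square of impact speed.
    return m.collision_probability * m.expected_impact_speed ** 2

def choose(candidates: list[Maneuver]) -> Maneuver:
    # The planner simply minimizes expected harm across the paths still open.
    return min(candidates, key=expected_harm)

candidates = [
    Maneuver("continue", collision_probability=0.9, expected_impact_speed=15.0),
    Maneuver("brake hard", collision_probability=0.5, expected_impact_speed=4.0),
    Maneuver("brake and steer to shoulder",
             collision_probability=0.2, expected_impact_speed=6.0),
]
print(choose(candidates).name)  # lowest-expected-harm option wins
```

Under these invented numbers the planner brakes and steers toward the shoulder – exactly the “brake and try to avoid” behavior described above, with no victim-selection logic anywhere in the code.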

Maybe what the trolley problem truly reveals is that we are uneasy with automated cars causing accidents – that is, because they are machines, we are much more comfortable with the idea that they will be perfect and coded so that no accident may ever happen. If, as a first milestone, CAVs are merely as safe as human drivers, that would certainly be a great scientific achievement. I recognize, however, that it might not be enough for public perception – but that speaks more to our relationship with machines than to any truth behind the murderous trolley. All in all, it is unfortunate that such a problem continues to keep brains busy while more tangible problems (such as what to do with all those batteries) deserve research, media attention, and political action.

The common story of automated vehicle safety is that by eliminating human error from the driving equation, cars will act more predictably, fewer crashes will occur, and lives will be saved. That future is still uncertain though. Questions still remain about whether CAVs will truly be safer drivers than humans in practice, and for whom they will be safer. In the remainder of this post, I will address this “for whom” question.

A recent study by Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern at Georgia Tech found that state-of-the-art object detection systems – the type used in autonomous vehicles – demonstrate higher error rates in detecting darker-skinned pedestrians as compared to lighter-skinned pedestrians. Controlling for factors like time of day or obstructed views, the technology was five percentage points less accurate at detecting people with darker skin tones.
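For readers unfamiliar with the metric, the disparity is measured in percentage points of per-group detection accuracy. A minimal sketch of the arithmetic (the counts below are invented for illustration and are not the study’s actual data):

```python
# Hypothetical counts, for illustration only -- not the study's data.
# Shows how a per-group detection gap in percentage points is computed.
def detection_rate(detected: int, total: int) -> float:
    return detected / total

lighter = detection_rate(detected=930, total=1000)  # 93% detected
darker = detection_rate(detected=880, total=1000)   # 88% detected

gap_points = (lighter - darker) * 100
print(f"detection gap: {gap_points:.1f} percentage points")
```

A gap of this shape is what “five percentage points less accurate” summarizes: the same detector, evaluated separately on each group, misses a larger share of one group’s pedestrians.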

The Georgia Tech study is far from the first report of algorithmic bias. In 2015, Google found itself at the center of controversy when its algorithm for Google Photos incorrectly classified some black people as gorillas. More than two years later, Google’s temporary fix of removing the label “gorilla” from the program entirely was still in place. The company says it is working on a long-term fix to its image recognition software. However, the continued presence of the temporary solution several years after the initial firestorm is some indication of either the difficulty of achieving a real solution or the lack of any serious coordinated response across the tech industry.

Algorithmic bias is a serious problem that must be tackled with a serious investment of resources across the industry. In the case of autonomous vehicles, the problem could literally be life and death. The potential for bias in automated systems raises serious moral and legal questions that demand answers. If a car is safer overall, but more likely to run over a black or brown pedestrian than a white one, should that car be allowed on the road? What is the safety baseline against which such a vehicle should be judged? Is the standard “the AV should be just as (hopefully not very) likely to hit any given pedestrian”? Or is it “the AV should hit any given pedestrian less often than a human-driven vehicle would”? Given our knowledge of algorithmic bias, should an automaker be exposed to greater damages when its vehicle hits a black or brown pedestrian than when it hits a white pedestrian? Do tort law claims, like design defect or negligence, provide adequate incentive for automakers to address algorithmic bias in their systems? Or should the government set up a uniform system of regulation and testing around the detection of algorithmic bias in autonomous vehicles and other advanced, potentially dangerous technologies?

These are questions that I cannot answer today. But as the Georgia Tech study and the Google Photos scandal demonstrate, they are questions that the AV industry, government, and society as a whole will need to address in the coming years.

As per my last post, our law school problem solving class is looking at problems created by the interaction between connected and automated vehicles and other roadway users. This article from The Information offers some interesting insights on the difficulties Waymo is facing as it deploys its robo-taxi service in Phoenix.  Basically, the problem comes down to . . . people.  A blurb from the article:

The biggest issue for Waymo’s vans and other companies’ prototypes is human drivers or pedestrians who fail to observe traffic laws. They do so by speeding, by not coming to complete stops, by turning illegally, texting while driving, or with an endless array of other moving violations that have become an accepted part of driving. Waymo’s prototypes sometimes respond to these maneuvers by stopping abruptly in ways that human drivers don’t anticipate. As a result, human drivers from time to time have rear-ended the Waymo vans.

This fall, the University of Michigan Law School is offering its third Problem Solving Initiative (“PSI”) course concerning connected and automated vehicles. The first class, offered in the Winter 2017 semester, involved a team of fifteen graduate students from law, business, engineering, and public policy who accepted the challenge of coming up with commercial use cases for data generated by connected vehicles using dedicated short-range communication (“DSRC”) technology.

In the Fall of 2017, we offered our second PSI course on CAVs—this one to twenty-three graduate students. That course focused on the problem of Level 3 autonomy, as defined by the Society of Automotive Engineers (“SAE”). Level 3 autonomy, or conditional automation, is defined as a vehicle driving itself in a defined operational design domain (“ODD”), with a human driver always on standby to take over the vehicle upon short notice when the vehicle exits the ODD. As with the first course, our student teams spent the semester collecting information from industry, governmental, and academic experts and proposing a series of innovative solutions to various obstacles to the deployment of Level 3 systems.

This semester, our PSI course is entitled Connected and Automated Vehicles: Preparing for a Mixed Fleet Future. I will be co-teaching the course with Anuj Pradhan and Bryant Walker Smith. Our focus will be on the multiple potential problems created by unavoidable future interactions between automated vehicles and other road users, such as non-automated, human-driven vehicles, pedestrians, and bicyclists.

Although cars can be programmed to follow rules of the road, at their core, driving and roadway use are social activities. Roadway users rely heavily on social cues, expectations, and understandings to navigate shared transportation infrastructure. For example, although traffic circles are in principle governed by a simple rule of priority to vehicles already in the circle, their actual navigation tends to be governed by a complex set of social interactions involving perceptions of the intentions, speed, and aggressiveness of other vehicles. Similarly, while most states require bicyclists to obey stop signs and traffic lights, most cyclists do not; prudent drivers should not expect them to.

Can cars be programmed to behave “socially”? Should they be, or is the advent of robotic driving an opportunity to shift norms and expectations toward a greater degree of adherence to roadway rules? Will programming vehicles to be strictly rule-compliant make CAVs “roadway wimps,” always giving in to more aggressive roadway users? Would that kill the acceptance of CAVs from a business perspective? Is reform legislation required to permit CAVs to mimic human drivers?

More generally, is the advent of CAVs an opportunity to reshape the way that all roadway users access roadways? For example, could the introduction of automated vehicles be an opportunity to reduce urban speeds? Or to prohibit larger private vehicles from some streets (since people may no longer be dependent only on their individually owned car)? These questions are simply illustrative of the sorts of problems our class may choose to tackle. Working in interdisciplinary groups, our graduate students will attempt to identify and solve the key legal, regulatory, technological, business, and social problems created by the interaction between CAVs and other roadway users.

As always, our class will rely heavily on the expertise of folks from government, industry, and academia. We welcome any suggestions for topics we should consider or experts who might provide important insights as our students begin their discovery process next week.

Cite as: Daniel A. Crane, The Future of Law and Mobility, 2018 J. L. & Mob. 1.

Introduction

With the launch of the new Journal of Law and Mobility, the University of Michigan is recognizing the transformative impact of new transportation and mobility technologies, from cars, to trucks, to pedestrians, to drones. The coming transition towards intelligent, automated, and connected mobility systems will transform not only the way people and goods move about, but also the way human safety, privacy, and security are protected, cities are organized, machines and people are connected, and the public and private spheres are defined.

Law will be at the center of these transformations, as it always is. There has already been a good deal of thinking about the ways that law must adapt to make connected and automated mobility feasible in areas like tort liability, insurance, federal preemption, and data privacy. [1. See, e.g., Daniel A. Crane, Kyle D. Logue & Bryce Pilz, A Survey of Legal Issues Arising from the Deployment of Autonomous and Connected Vehicles, 23 Mich. Tel. & Tech. L. Rev. 191 (2017).] But it is also not too early to begin pondering the many implications for law and regulation arising from the technology’s spillover effects as it begins to permeate society. For better or worse, connected and automated mobility will disrupt legal practices and concepts in a variety of ways beyond the obvious “regulation of the car.” Policing practices and Fourth Amendment law, now so heavily centered on routine automobile stops, will of necessity require reconsideration. Notions of ownership of physical property (i.e., an automobile) and data (i.e., accident records) will be challenged by the automated sharing economy. And the economic and regulatory structure of the transportation network will have to be reconsidered as mobility transitions from a largely individualistic model of drivers in their own cars pursuing their own ends within the confines of general rules of the road to a model in which shared and interconnected vehicles make collective decisions to optimize the system’s performance. In these and many other ways, the coming mobility revolution will challenge existing legal concepts and practices, with implications far beyond the “cool new gadget of driverless cars.”

Despite the great importance of the coming mobility revolution, the case for a field of study in “law and mobility” is not obvious. In this inaugural essay for the Journal of Law and Mobility, I shall endeavor briefly to make that case.

I. Driverless Cars and the Law of the Horse

A technological phenomenon can be tremendously important to society without necessarily meriting its own field of legal study because of what Judge Frank Easterbrook has described as “the law of the horse” problem. [2. Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207, 207–16.] Writing against the burgeoning field of “Internet law” in the early 1990s, Easterbrook argued against organizing legal analysis around particular technologies:

The best way to learn the law applicable to specialized endeavors is to study general rules. Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on “The Law of the Horse” is doomed to be shallow and to miss unifying principles. [3. Id.]

Prominent advocates of “Internet law” as a field rebutted Easterbrook’s concern, arguing that focusing on cyberlaw as a field could be productive to understanding aspects of this important human endeavor in ways that merely studying general principles might miss. [4. Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).] Despite Easterbrook’s protestation, a distinct field of cyberlaw has grown up in recent decades.

“The law of the horse” debate seems particularly apt to the question of law and mobility since the automobile is the lineal successor of the horse as society’s key transportation technology. Without attempting to offer a general solution to the “law of the horse” question, it is worth drawing a distinction between two different kinds of disruptive technologies—those in which the technological change produces social changes indirectly and without significant possibilities for legal intervention, and those in which law is central to the formation of the technology itself.

An example of the first species of technological change is air conditioning. The rise of air conditioning in the mid-twentieth century had tremendous effects on society, including dramatic increases in business productivity, changes in living patterns as people shifted indoors, and the extension of retail store hours and hence the growing commercialization of American culture. [5. Stan Cox, Losing Our Cool: Uncomfortable Truths About Our Air-Conditioned World (and Finding New Ways to Get Through the Summer) (2012).] The South’s share of U.S. population was in steady decline until the 1960s when, in lockstep with the growth of air conditioning and people’s willingness to settle in hot places, the trend abruptly reversed and the South’s share grew dramatically. [6. Paul Krugman, Air Conditioning and the Rise of the South, New York Times, Mar. 28, 2015.] The political consequences were enormous—from Richard Nixon through George W. Bush, every elected President hailed from warm climates.

One could say, without exaggeration, that Willis Carrier’s frigid contraption exerted a greater effect on American business, culture, and politics than almost any other invention of the twentieth century. And yet, it would seem silly to launch a field of study in “law and air conditioning.” Air conditioning’s social, economic, and political effects were largely indirect—the result of human decisions in response to the new circumstances created by the technology rather than an immediate consequence of the technology itself. Even if regulators had foreseen the dramatic demographic effects of air conditioning’s spread, there is little they could have done (short of killing or limiting the technology) to mediate the process of change by regulating the technology.

Contrast the Internet. Like air conditioning, the Internet has had tremendous implications for culture, business, and politics, but unlike air conditioning, many of these effects were artifacts of design decisions regarding the legal architecture of cyberspace. From questions of taxation of online commercial transactions, [7. See, e.g., John E. Sununu, The Taxation of Internet Commerce, 39 Harv. J. Leg. 325 (2002).] to circumvention of digital rights management technologies, [8. See, e.g., David Nimmer, A Riff on Fair Use in the Digital Millennium Copyright Act, 148 U. Pa. L. Rev. 673 (2000).] to personal jurisdiction over geographically remote online interlocutors, [9. Note, No Bad Puns: A Different Approach to the Problem of Personal Jurisdiction and the Internet, 116 Harv. L. Rev. 1821 (2003).] and in countless other ways, a complex of legal and regulatory decisions created the modern Internet. From the beginning, law was hovering over the face of cyberspace. Al Gore may not have created the Internet, but lawyers had as much to do with it as did engineers.

The Internet’s legal architecture was not established at a single point in time, by a single set of actors, or with a single set of ideological commitments or policy considerations. Copyright structures were born of the contestation among one set of stakeholders, which was distinct from the sets of stakeholders contesting over tax policy, net neutrality, or revenge porn. And yet, the decisions made in separate regulatory spheres often interact in underappreciated ways to lend the Internet its social and economic character. Tax policy made Amazon dominant in retail, copyright policy made Google dominant in search, and data protection law (or its absence) made Facebook dominant in social media—with the result that all three have become antitrust problems.

Whether or not law students should be encouraged to study “Internet law” in a discrete course, it seems evident with the benefit of thirty years of hindsight that the role of law in mediating cyberspace cannot be adequately comprehended without a systemic inquiry. Mobility, I would argue, will be much the same. While the individual components of the coming shift toward connectivity and automation—i.e., insurance, tort liability, indemnification, intellectual property, federal preemption, municipal traffic law, etc.—will have analogues in known circumstances and hence will benefit from consideration as general questions of insurance, torts, and so forth, the interaction of the many moving parts will produce a novel, complex ecosystem. Given the potential of that ecosystem to transform human life in many significant ways, it is well worth investing some effort in studying “law and mobility” as a comprehensive field.

II. An Illustration from Three Connected Topics

It would be foolish to attempt a description of mobility’s future legal architecture at this early stage in the mobility revolution. However, in an effort to provide some further motivation for the field of “law and mobility,” let me offer an illustration from three areas in which legal practices and doctrines may be affected in complex ways by the shift toward connected and automated vehicles. Although these three topics entail consideration of separate fields of law, the technological and legal decisions made with respect to them could well have system-wide implications, which shows the value of keeping the entire system in perspective as discrete problems are addressed.

A. Policing and Public Security

For better or for worse, the advent of automated vehicles will redefine the way that policing and law enforcement are conducted. Routine traffic stops are fraught, but potentially strategically significant, moments for police-citizen interactions. Half of all citizen-police interactions, [10. Samuel Walker, Science and Politics in Police Research: Reflections on their Tangled Relationship, 593 Annals Am. Acad. Pol. & Soc. Sci. 137, 142 (2004); Matthew R. Durose et al., U.S. Dep’t of Justice, Office of Justice Programs, Bureau of Justice Statistics, Contacts Between Police and the Public, 2005, at 1 (2007).] more than forty percent of all drug arrests, [11. David A. Sklansky, Traffic Stops, Minority Motorists, and the Future of the Fourth Amendment, 1997 Sup. Ct. Rev. 271, 299.] and over thirty percent of police shootings [12. Adams v. Williams, 407 U.S. 143, 148 n.3 (1972).] occur in the context of traffic stops. Much of the social tension over racial profiling and enforcement inequality has arisen in the context of police practices with respect to minority motorists. [13. Ronnie A. Dunn, Racial Profiling: A Persistent Civil Rights Challenge Even in the Twenty-First Century, 66 Case W. Res. L. Rev. 957, 979 (2016) (reporting statistics on disproportionate effects on racial minorities of routine traffic stops).] The traffic stop is central to modern policing, including both its successes and pathologies.

Will there continue to be routine police stops in a world of automated vehicles? Surely traffic stops will not disappear altogether, since driverless cars may still have broken taillights or lapsed registrations. [14. See John Frank Weaver, Robot, Do You Know Why I Stopped You?] But with the advent of cars programmed to follow the rules of the road, the number of occasions for the police to stop cars will decline significantly. As a general matter, the police need probable cause to stop a vehicle on a roadway. [15. Whren v. U.S., 517 U.S. 806 (1996).] A world of predominantly automated vehicles will mean many fewer traffic violations, and hence many fewer police stops, fewer police-citizen interactions, and fewer arrests based on evidence of crime discovered during those stops.

On the positive side, that could mean a significant reduction in some of the abuses and racial tensions around policing. But it could also deprive the police of a crime detection dragnet, with the consequence either that the crime rate will increase due to the lower detection rate or that the police will deploy new crime detection strategies that could create new problems of their own.

Addressing these potentially sweeping changes to the practices of policing brought about by automated vehicle technologies requires considering both the structure of the relevant technology and the law itself. On the technological side, connected and automated vehicles could be designed for easy monitoring and controlling by the police. That could entail a decline in privacy for vehicle occupants, but also potentially reduce the need for physical stops by the police (cars that can be remotely monitored can be remotely ticketed) and hence some of the police-citizen roadside friction that has dominated recent troubles.

On the legal side, the advent of connected and automated vehicles will require rethinking the structure of Fourth Amendment law as applied to automobiles. At present, individual rights as against searches and seizures often rely on distinctions between drivers and passengers, or owners and occupants. For example, a passenger in a car may challenge the legality of the police stop of the car, [16. Brendlin v. California, 551 U.S. 249 (2007).] but have diminished expectations of privacy in the search of the vehicle’s interior if they are not the vehicle’s owner or bailee. [17. U.S. v. Jones, 565 U.S. 400 (2012).] In a mobility fleet without drivers and (as discussed momentarily) perhaps without many individual owners, these conceptions of the relationship of people to cars will require reconsideration.

B. Ownership, Sharing, and the Public/Private Divide

In American culture, the individually owned automobile has historically been far more than a transportation device—it has been an icon of freedom, mobility, and personal identity. As Ted McAllister has written concerning the growth of automobile culture in the early twentieth century:

The automobile squared perfectly with a distinctive American ideal of freedom—freedom of mobility. Always a restless nation, with complex migratory patterns throughout the 17th, 18th, and 19th centuries, the car came just as a certain kind of mobility had reached an end with the closing of the frontier. But the restlessness had not ended, and the car allowed control of space like no other form of transportation. [18. Ted V. McAllister, Cars, Individualism, and the Paradox of Freedom in a Mass Society.]

Individual car ownership has long been central to conceptions of property and economic status. The average American adult currently spends about ten percent of his or her income on an automobile, [19. Máté Petrány, This Is How Much Americans Spend on their Cars.] making it by far his or her most expensive item of personal property. The social costs of individual automobile ownership are far higher. [20. Edward Humes, The Absurd Primacy of the Automobile in American Life; Robert Moor, What Happens to the American Myth When You Take the Driver Out of It?]

The automobile’s run as an icon of social status through ownership may be ending. Futurists expect that the availability of on-demand automated vehicle service will complete the transition from mobility as personal property to mobility as a service, as more and more households stop buying cars and rely instead on ride sharing services. [21. Smart Cities and the Vehicle Ownership Shift.] Ride sharing companies like Uber and Lyft have long been on this case, and now automobile manufacturers are scrambling to market their vehicles as shared services. [22. Ryan Felton, GM Aims to Get Ahead of Everyone with Autonomous Ride-Sharing Service in Multiple Cities by 2019.] With the decline of individual ownership, what will happen to conceptions of property in the physical space of the automobile, in the contractual right to use a particular car or fleet of automobiles, and in the data generated about occupants and vehicles?

The coming transition from individual ownership to shared service will also raise important questions about the line between the public and private domains. At present, the “public sphere” is defined by mass transit whereas the individually owned automobile constitutes the “private sphere.” The public sphere operates according to ancient common carrier rules of universal access and non-discrimination, whereas a car is not quite “a man’s castle on wheels” for constitutional purposes, [23. See Illinois v. Lidster, 540 U.S. 419, 424 (2004) (“The Fourth Amendment does not treat a motorist’s car as his castle.”).] but still a non-public space dominated by individual rights as against the state rather than public obligations. [24. E.g., Byrne v. Rutledge, 623 F.3d 46 (2d Cir. 2010) (holding that motor vehicle license plates were nonpublic fora and that the state’s ban on vanity plates referencing religious topics violated the First Amendment).] As more and more vehicles are held and used in shared fleets rather than individual hands, the traditional line between publicly minded “mass transit” and individually minded vehicle ownership will come under pressure, with significant consequences for both efficiency and equality.

C. Platform Mobility, Competition, and Regulation

The coming transition toward ride sharing fleets rather than individual vehicle ownership described in the previous section will have additional important implications for the economic structure of mobility—which of course will raise important regulatory questions as well. At present, the private transportation system is highly atomistic. In the United States alone, there are 264 million individually owned motor vehicles in operation. 25. U.S. Dep’t of Energy, Transportation Energy Data Book, Chapter 8, Household Vehicles and Characteristics, Table 8.1, Population and Vehicle Profile, https://cta.ornl.gov/data/chapter8.shtml (last visited May 29, 2018). For the reasons previously identified, expect many of these vehicles to shift toward corporate-owned fleets in coming years. The question then will be how many such fleets will operate—whether we will see robust fleet-to-fleet competition or instead convergence toward a few dominant providers, as we are seeing in other important areas of the “platform economy.”

There is every reason to believe that, before too long, mobility will tend in the direction of other monopoly or oligopoly platforms because it will share their economic structure. The key economic facts behind the rise of dominant platforms like Amazon, Twitter, Google, Facebook, Microsoft, and Apple are the presence of scale economies and network effects—system attributes that make the system more desirable for other users as new users join. 26. See generally David S. Evans & Richard Schmalensee, A Guide to the Antitrust Economics of Networks, Antitrust, Spring 1996, at 36; Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93 (1994). In the case of the mobility revolution, a number of features are suggestive of future scale economies and network effects. The more cars in a fleet, the more likely it is that one will be available when summoned by a user. The more cars connected to other cars in a fleet, the higher the quality of the information (on such topics as road and weather conditions and vehicle performance) available within the fleet and the steeper the machine learning curve.

As is true with other platforms, the mere presence of scale economies and network effects does not have to lead inexorably to market concentration or monopoly. Law and regulation may intervene to mitigate these effects, for example by requiring information sharing or interconnection among rival platforms. But such mandatory information sharing or interconnection obligations are not always advisable, as they can diminish a platform’s incentives to invest in its own infrastructure or otherwise impair incentives to compete.

Circling back to the “law of the horse” point raised at the outset, these issues are not, of course, unique to law and mobility. But this brief examination of these three topics—policing, ownership, and competition—shows the value of considering law and mobility as a distinct topic. Technological, legal, and regulatory decisions we make with respect to one particular set of problems will have implications for distinct problems perhaps not under consideration at that moment. For example, law and technology will operate conjunctively to define the bounds of privacy expectations in connected and automated vehicles, with implications for search and seizure law, property and data privacy norms, and sharing obligations to promote competition. Pulling a “privacy lever” in one context—say to safeguard against excessive police searches—could have spillover effects in another context, for example by bolstering a dominant mobility platform’s arguments against mandatory data sharing. Although the interactions between the different technological decisions and related legal norms are surely impossible to predict or manage with exactitude, consideration of law and mobility as a system will permit a holistic view of this complex, evolving ecosystem.

Conclusion

Law and regulation will be at the center of the coming mobility revolution. Many of the patterns we will observe at the intersection of law and the new technologies will be familiar—at least if we spend the time to study past technological revolutions—and general principles will be sufficient to answer many of the rising questions. At the same time, there is a benefit to considering the field of law and mobility comprehensively with an eye to understanding the often subtle interactions between discrete technological and legal decisions. The Journal of Law and Mobility aims to play an important role in this fast-moving space.


Frederick Paul Furth, Sr. Professor of Law, University of Michigan. I am grateful for helpful comments from Ellen Partridge and Bryant Walker Smith. All errors are my own.