Policing

On November 19, the NTSB held a public board meeting on the 2018 Uber accident in Tempe, Arizona, involving an “automated” (actually level 3) Uber-operated Volvo SUV. A pedestrian, Elaine Herzberg, died in the accident. In the wake of the report, now is a good time to come back to level 3 cars and the question of “safety drivers.”

Given that the purpose of the meeting was to assign blame, media outlets were quick to pick a culprit for their headlines: the “safety driver” who kept looking at her phone? The sensors that detected all kinds of things but never a person? Uber, which deactivated the OEM’s emergency braking? Or maybe Uber’s “safety culture”? A whole industry’s?

The Board actually blames all of them, steering clear of singling out one event or actor. That is probably the safest and most reasonable course for a regulator, and it has relevant implications for how law enforcement will handle accidents involving AVs in the future. But because we are human, we tend to latch onto the human part of the story: that of the safety driver.

She was allegedly looking at her phone, “watching TV” as one article put it, following the latest episode of The Voice. The Board determined that she looked at the road one second before the impact. That is short, but under more normal circumstances, enough to slam the brakes. Maybe her foot was far from the pedal; maybe she simply did not react because she was not in an “aware” state of mind (“automation complacency,” the report calls it). In any case, it was her job to watch the road, and she was violating Uber’s policy by using her phone while working as a safety driver.

At the time of the accident, the Tempe police released footage from the dash cam, a few seconds up to the impact, showing a poorly lit street. The relevance of this footage was then disputed in an Ars Technica article, which aimed to demonstrate how well lit the street actually is, and how the car’s headlights alone should have made the victim visible in time. Yet I think it is too easy to put the blame on the safety driver. She was not doing her job, but what kind of job was it? Humans drive reasonably well, but that is when we are actually driving, not sitting in the driver’s seat with nothing to do but wait for something to jump out of the roadside. Even if she had been paying attention, injury was reasonably foreseeable. And even if she had been driving in broad daylight, there remains a more fundamental problem besides safety driver distraction.

“The [NTSB] also found that Uber’s autonomous vehicles were not properly programmed to react to pedestrians crossing the street outside of designated crosswalks,” one article writes. I find that somewhat more appalling than the finding that the safety driver was distracted. Call that human bias; still, I do not expect machines to be perfect. But what this tells us is that stricter monitoring of safety drivers’ cellphone usage will not cut it either, if the sensors keep failing. The sensors need to be able to handle this kind of situation. A car whose sensors cannot recognize a slowly crossing pedestrian (anywhere, even in the middle of the highway) does not have its place on a 45-mph road, period.

If there is one thing this accident has shown, it is that “safety drivers” add little to the safety of AVs. It is a coin flip: in some cases, the reactivity and skill of the driver make up for the sensor failure; in others, a distracted, “complacent” driver (whatever the reason, phone or other) does not. It is safe to say that the overall effect on safety is at best neutral. Even worse, it may provide a false sense of safety to the operator, as it apparently did here. This, in turn, prompts us to think about level 3 altogether.

While Uber has stated that it has “significantly improved its safety culture” since the accident, the question of the overall safety of these level 3 cars remains. And beyond everything Uber can do, one may wonder whether such accidents are not bound to repeat themselves should level 3 cars see mass commercial deployment. Humans are not reliable “safety drivers.” And in a scenario that involves such drivers, it takes much less than the deadly laundry list of failures we had here for such an accident to happen. Being complacent may also mean that your foot is not close to the pedals, or that your hands are not “hovering above the steering wheel” as they (apparently) should be. The extra half second it takes to slam the brakes or grip the wheel is enough to turn serious injury into death.
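To put that half second in perspective, here is a quick back-of-the-envelope sketch. The speeds and delays are my own illustrative assumptions, not figures from the NTSB report:

```python
# Illustrative only: how far a car travels during a driver's reaction delay,
# at constant speed. Speeds/delays below are assumptions for the example.

MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.467 ft/s


def distance_traveled_ft(speed_mph: float, delay_s: float) -> float:
    """Feet covered while the driver is still reacting, at constant speed."""
    return speed_mph * MPH_TO_FPS * delay_s


# At 45 mph, an extra half second of reaction time means roughly 33 feet
# traveled before the brakes are even touched.
print(round(distance_traveled_ft(45, 0.5), 1))  # 33.0
```

Thirty-odd feet of extra travel is easily the difference between a near miss, an injury, and a fatality.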

The paramount error here was to integrate a human, a person Uber should have known would be distracted or less responsive than an average driver, as the final safety net against sensor failure. Not long ago, many industry players were concerned about early standardization. Now that some companies are out there, going fast and literally breaking people (not even things, mind you!), the time has come to seriously discuss safety and testing standards, at the US federal and, why not, international level.

A University of Michigan Law School Problem Solving Initiative class on AV standardization will take place during the Winter semester of 2020, with deliverables in April. Stay tuned!

Last time I wrote about platooning, and the potential economic savings that could benefit the commercial trucking sector if heavy duty trucks were to implement the technology. This week, I’m writing about one of the current barriers to implementing platooning both as a commercial method, and in the larger scheme of highway driving.

One of the most readily identifiable barriers to the widespread implementation of truck platooning is the “Following Too Close” (“FTC”) laws enforced by almost every state. There is currently a patchwork of state legislation preventing vehicles from following too closely behind one another. In many states, violating these laws is negligence per se.

For those who don’t quite remember 1L torts, negligence per se essentially means “if you violate this statute, that proves an element of negligence.” Therefore, if one vehicle is following too closely behind another in violation of an FTC statute, that satisfies the breach element of negligence and is likely enough to support a fine for negligent driving.

These laws are typically meant to prevent vehicles from following dangerously close or tailgating other vehicles. The state laws that regulate this conduct can be divided into roughly four categories. Some states prescribe the distance or time a driver must remain behind the vehicle in front of them; others impose a more subjective standard. The subjective standards are far more common than the objective standards.

Subjective Categories

  • “Reasonable and Prudent” requires enough space between vehicles for a safe stop in case of an emergency. This FTC rule is the most common for cars and seems to be a mere codification of common-law rules of ordinary care.
  • “Sufficient space to enter and occupy without danger” requires trucks and vehicles with trailers to leave enough space that another vehicle may “enter and occupy such space without danger.” This is the most common rule for trucks.

Objective Categories

  • Distance-Based: Some states prescribe the distance at which a vehicle may follow another vehicle; others identify a proportionate interval based on distance and speed. These are the most common rules for heavy trucks and frequently set the minimum distance between 300 and 500 feet.
  • Time-Based: Timing is the least common FTC rule, but the two jurisdictions that impose it require drivers to travel “at least two seconds behind the vehicle being followed.”

It is easy to see how, given the close distance at which vehicles need to follow to benefit from platooning, any of these laws would on their face prohibit platooning within their borders. However, several states have already enacted legislation which exempts the trailing truck in a platoon from their “Following Too Close” laws. As of April 2019, 15 states had enacted legislation to that effect. Additional states have passed legislation to allow platoon testing or pilot programs within their states.

However, despite some states enacting this legislation, a non-uniform regulatory scheme does not provide the level of certainty that will incentivize investment in platooning technology. Uncertain state regulation can disincentivize interstate carriers from investing in platooning, and could lead to a system where platooning trucks operate only within single-state boundaries.

Although the exemptions are a step in the right direction, non-uniformity will likely result in a lower overall platooning usage rate, limiting the widespread fuel-efficiency and safety benefits that platooning delivers when implemented on a large, interstate scale. Without uniform legislation that allows platooning to operate consistently across all states, the need for different systems will hinder the technology’s development and the rate at which trucking companies adopt it.

However, even if not all states pass legislation exempting platooning vehicles from their FTC laws, there could be a way around the subjective elements. The most common subjective standard, “Reasonable and Prudent,” requires only enough space that the vehicles can safely stop in case of an emergency. For a human driver, that distance is likely at least a hundred feet, given the speed at which cars travel on the interstate. However, recall from last week that platooning vehicles are synchronized in their acceleration, deceleration, and braking.

If the vehicles travel in tandem and brake at the same time and rate, any gap greater than a few feet would arguably be “reasonable and prudent.” Perhaps what needs to be developed is a “reasonable platooning vehicle” standard, rather than a “reasonable driver” standard, when it comes to autonomous vehicle technology. Then again, considering the ever-looming potential for technological failure, it could be argued that following that closely behind another heavy vehicle is never reasonable and prudent, once again requiring an exemption rather than an interpretive legal argument for a new “reasonableness” standard.
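The intuition can be made concrete with a rough sketch. The speed, reaction time, and latency below are illustrative assumptions (not drawn from any statute or study): the gap a human driver needs is dominated by perception-reaction time, which synchronized braking largely removes.

```python
# Illustrative only: the following gap needed just to cover the delay
# before braking begins, for a human driver vs. a synchronized platoon.
# All values below are assumptions for the example.

MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.467 ft/s


def reaction_gap_ft(speed_mph: float, reaction_s: float) -> float:
    """Distance covered before braking even begins, at constant speed."""
    return speed_mph * MPH_TO_FPS * reaction_s


# A human at 65 mph with a typical ~1.5 s perception-reaction time:
human_gap = reaction_gap_ft(65, 1.5)    # ~143 ft before the brakes engage
# A platooning truck braking in sync, with ~0.1 s communication latency:
platoon_gap = reaction_gap_ft(65, 0.1)  # ~10 ft

print(round(human_gap), round(platoon_gap))  # 143 10
```

On these assumptions, a synchronized follower needs an order of magnitude less gap than a human driver just to absorb the braking delay, which is the whole premise behind treating the platoon, rather than the driver, as the unit of “reasonableness.”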

Either way, to ensure certainty for businesses, more states should exempt platooning vehicles from their “Following Too Close” laws. Otherwise, the technology may never achieve a scale that makes it worth the early investment.

October 2019 Mobility Grab Bag

Every month brings new developments in mobility, so let’s take a minute to break down a few recent developments that touch on issues we’ve previously discussed on the blog:

New AV Deployments

This month saw a test deployment of Level 4 vehicles in London, which even allowed members of the public to be passengers (with a safety driver). Meanwhile, in Arizona, Waymo announced it will be deploying vehicles without safety drivers, though it appears only members of their early-access test group will be riding in them for now. We’ve written a lot about Waymo, from some early problems with pedestrians and other drivers, to the regulations placed on them by Arizona’s government, to their potential ability to navigate human controlled intersections.

Georgia Supreme Court Requires a Warrant for Vehicle Data

This Monday, the Georgia Supreme Court, in the case of Mobley v. State, ruled that recovering data from a vehicle without a warrant “implicates the Fourth Amendment, regardless of any reasonable expectations of privacy.” The court found that an investigator entering the vehicle to download data from the vehicle’s airbag control unit constituted “physical intrusion of a personal motor vehicle,” an action which “generally is a search for purposes of the Fourth Amendment under the traditional common law trespass standard.” Given the amount of data currently collected by vehicles and the ever-increasing amount of data that CAVs can and will collect, rulings like this are very important in dictating how and when law enforcement can obtain vehicle data. We’ve previously written about CAVs and the 4th Amendment, as well as other privacy implications of CAVs, both with regard to government access to data and the use of CAV data by private parties.

Personal Cargo Bots Could Bring Even More Traffic to Your Sidewalk

In May, as part of a series on drones, I wrote about a number of test programs deploying small delivery bots for last-mile deliveries via the sidewalk. A recent Washington Post article highlights another potential contender for sidewalk space – personal cargo bots. Called “gita,” the bot can travel at up to 6 mph as it uses its onboard cameras to track and follow its owner, via the owner’s gait. The bot’s developers see it as enhancing mobility, since it would allow people to go shopping on foot without worrying about carrying their goods home. For city-dwellers that may improve grocery trips, if they can shell out the $3,000+ price tag!

Even More Aerial Drones to Bring Goods to Your Door

Last month, as part two of the drone series, I looked at aerial delivery drones. In that piece I mentioned that Google-owned Wing would be making drone deliveries in Virginia, and Wing recently announced a partnership with Walgreens that will be part of that test. Yesterday Wired pointed out that UPS has made a similar deal with CVS – though it remains to be seen if the drones will have to deliver the infamously long CVS receipts as well. As Wired noted, because drugstores carry goods whose absence can quickly become an emergency (like medication and diapers), speedy air delivery could fill a useful niche. So next time you’re home with a cold, you may be able to order decongestant to be flown to your bedside, or at least to the yard outside your bedroom window.

P.S. – While not related to any past writings, this article is pretty interesting – Purdue scientists took inspiration from the small hairs on the legs of spiders to invent a new sensor that can ignore minor forces acting on a vehicle while detecting major ones, making it easier for CAVs and drones to focus computing power on important things in their environment without getting distracted.

While AVs have a lot of technological leaps to make before widespread deployment, developers and governments alike also need to consider the human factors involved, including good old-fashioned human fear. Earlier this year, an AAA study showed that almost three out of four (71%) Americans are afraid to ride in an AV. This is a 10% rise in apprehension from earlier studies, a trend that could be connected to the publicity around the 2018 Uber crash in Tempe, Ariz., where a test vehicle struck and killed a pedestrian. This lack of trust in AVs alone should be concerning to developers, but in some places that lack of trust has turned into outright enmity.

Test deployments, like the one undertaken by Waymo in Arizona, have become the targets of anger from drivers and pedestrians, including an incident where a man pointed a gun at a passing Waymo test vehicle, in full view of the AV’s safety driver. In that case, the man with the weapon (who was arrested) claimed he hated the vehicles, specifically citing the Uber crash as a reason for his anger. Waymo test vehicles have also been pelted with rocks, had their tires slashed, and been targeted by motorists trying to run them off the road. The incidents have led to caution on the part of Waymo, which has trained its drivers on how to respond to harassment (including how to spot vehicles that are following them, as witnessed by a group of Arizona Republic reporters last December). Arizona is not the only place where this has happened – in California, during a three-month period of 2018, two of the six accidents involving AVs were caused by other drivers intentionally colliding with the AV.

So where is this anger coming from? For some in Arizona, it was from feeling that their community was being used as a laboratory, with them as guinea pigs, by AV developers. Ironically, that line of thought has been cited by a number of people who currently oppose the deployment of test AVs in and around Silicon Valley. It’s rather telling that the employees of many of the companies pushing for AV testing don’t want it to occur in their own towns (some going as far as to threaten to “storm city hall” if testing came to Palo Alto…). Other objections may stem from people seeing AVs as a proxy for all automation, and the potential loss of jobs that entails.

So what can be done to make people trust AVs, or at least accept them enough not to run them off the road? On the jobs front, in June a group of Senators introduced a bill to have the Labor Department track jobs being displaced by automation. Responding to the changes brought on by automation is a centerpiece of Democratic presidential candidate Andrew Yang’s campaign, and the issue has been raised by other candidates as well. The potential of automation to take away jobs is a long-standing issue made more visible by AVs on the road, and one that won’t be solved by AV proponents alone. What AV supporters have done, and can continue to do, is educate the public not only on the potential benefits of AV deployment (which PAVE, an industry coalition, has done), but also on just how AV technology works. At least part of the fear of AVs stems from not understanding how the tech actually operates, and transparency in that vein could go a long way. Future test projects also need to get input from communities before they start testing, to ease the feeling of AVs being imposed upon an unwilling neighborhood. A recent debate over AV testing in Pittsburgh, where the city obtained funds for community outreach only after approving testing, leading to pushback from community members, shows why a proper pre-testing order of operations is vital.

For now, there is clearly a lot of room for public engagement and education. Developers should take advantage of this period, when AVs are in the public eye without being widely deployed, to build trust and understanding, so that once the vehicles start appearing everywhere they are met with open arms, or at least tolerated, rather than run off the road. After all, while AVs themselves may not feel road rage, it’s already clear they can be victims of it.

P.S. – If you’re interested in learning more about negative reactions to robots, a good starting point is this NY Times article from January 2018.

A write-up of the afternoon sessions is now available here!

March 15, 2019 – 10:00 AM – 5:30 PM

Room 1225, Jeffries Hall, University of Michigan Law School 

In the case of automated driving, how and to whom should the rules of the road apply? This deep-dive conference brings together experts from government, industry, civil society, and academia to answer these questions through focused and robust discussion.

To ensure that discussions are accessible to all participants, the day will begin with an introduction to the legal and technical aspects of automated driving. It will then continue with a more general discussion of what it means to follow the law. After a lunch keynote by Rep. Debbie Dingell, expert panels will consider how traffic law should apply to automated driving and the legal person (if any) who should be responsible for traffic law violations. The day will conclude with audience discussion and a reception for all attendees.

(Re)Writing the Rules of the Road is presented by the University of Michigan Law School’s Law and Mobility Program, and co-sponsored by the University of South Carolina School of Law.

Schedule of Events

Morning Sessions 

  • 10:00 am – 10:45 am

Connected and Automated Vehicles – A Technical and Legal Primer

Prof. Bryant Walker Smith

Professor Bryant Walker Smith will provide a technical and legal introduction to automated driving and connected driving with an emphasis on the key concepts, terms, and laws that will be foundational to the afternoon sessions. This session is intended for all participants, including those with complementary expertise and those who are new to automated driving. Questions are welcome. 

  • 10:45 am – 11:15 am
Drivers Licenses for Robots? State DMV Approaches to CAV Regulation

Bernard Soriano, Deputy Director of the California DMV, and James Fackler, Assistant Administrator for the Customer Services Administration in the Michigan Secretary of State’s Office, discuss their respective states’ approaches to regulating connected and autonomous vehicles.

  • 11:15 am – 12:00 pm
Just What Is the Law? How Does Legal Theory Apply to Automated Vehicles and Other Autonomous Technologies?

Prof. Scott Hershovitz    

Human drivers regularly violate the rules of the road. What does this say about the meaning of law? Professor Scott Hershovitz introduces legal theory and relates it to automated driving and autonomy more generally.                  

Keynote & Lunch

  • 12:00 pm – 12:30 pm
Lunch

Free for all registered attendees!

  • 12:30 pm-1:30 pm

Keynote – Rep. Debbie Dingell

Rep. Dingell shares her insights from both national and local perspectives.  

Afternoon Sessions

(Chatham House Rule)

  • 1:30 pm – 3:00 pm
Crossing the Double Yellow Line: Should Automated Vehicles Always Follow the Rules of the Road as Written?

Should automated vehicles be designed to strictly follow the rules of the road? How should these vehicles reconcile conflicts between those rules? Are there meaningful differences among exceeding the posted speed limit to keep up with the flow of traffic, crossing a double yellow line to give more room to a bicyclist, and driving through a stop sign at the direction of a police officer? If flexibility and discretion are appropriate, how can they be achieved in law?

A panel of experts will each briefly present their views on these questions, followed by open discussion with other speakers and questions from the audience.

Featured Speakers:

Justice David F. Viviano, Michigan Supreme Court

Emily Frascaroli, Counsel, Ford Motor Company

Jessica Uguccioni, Lead Lawyer, Automated Vehicles Review, Law Commission of England and Wales

  • 3:15 pm – 4:45 pm
Who Gets the Ticket? Who or What is the Legal Driver, and How Should Law Be Enforced Against Them?

Who or what should decide whether an automated vehicle should violate a traffic law? And who or what should be responsible for that violation? Are there meaningful differences among laws about driving behavior, laws about vehicle maintenance, and laws and post-crash responsibilities? How should these laws be enforced? What are the respective roles for local, state, and national authorities?

A panel of experts will each briefly present their views on these questions, followed by open discussion with other speakers and questions from the audience.

Featured Speakers:

Thomas J. Buiteweg, Partner, Hudson Cook, LLP

Kelsey Brunette Fiedler, Ideation Analyst in Mobility Domain

Karlyn D. Stanley, Senior Policy Analyst, RAND Corporation

Daniel Hinkle, State Affairs Counsel, American Association for Justice

  • 4:45 pm – 5:30 pm
Summary and General Discussion

Participants and attendees close out the day with a wide-ranging discussion of all of the day’s panels.

Cite as: Raphael Beauregard-Lacroix, (Re)Writing the Rules of The Road: Reflections from the Journal of Law and Mobility’s 2019 Conference, 2019 J. L. & Mob. 97.

On March 15th, 2019, the Journal of Law and Mobility, part of the University of Michigan’s Law and Mobility Program, presented its inaugural conference, entitled “(Re)Writing the Rules of The Road.” The conference was focused on issues surrounding the relationship between automated vehicles (“AVs”) and the law. In the afternoon, two panels of experts from academia, government, industry, and civil society were brought together to discuss how traffic laws should apply to automated driving and the legal person (if any) who should be responsible for traffic law violations. The afternoon’s events occurred under a modified version of the Chatham House Rule, to allow the participants to speak more freely. In the interest of allowing those who did not attend to still benefit from the day’s discussion, the following document was prepared. This document is a summary of the two panels, and an effort has been made to de-identify the speakers while retaining the information conveyed.

Panel I: Crossing the Double Yellow Line: Should Automated Vehicles Always Follow the Rules of the Road as Written?

The first panel focused on whether automated vehicles should be designed to strictly follow the rules of the road. Questions included – How should these vehicles reconcile conflicts between those rules? Are there meaningful differences between acts such as exceeding the posted speed limit to keep up with the flow of traffic, crossing a double yellow line to give more room to a bicyclist, or driving through a stop sign at the direction of a police officer? If flexibility and discretion are appropriate, how can this be reflected in law? 

Within the panel, there was overall agreement among the participants that we need both flexibility in making the law and flexibility in the law itself. It was agreed that rigidity, on the side of the technology as well as on the side of norms, would not serve AVs well. The debate focused on just how much flexibility there should be and how this flexibility can be formulated in the law.

One type of flexibility that already exists is legal standards. One participant emphasized that the law is not the monolith it may seem from the outside – following a single rule, like not crossing a double yellow line, is not the end of an individual’s interaction with the law. There are a host of different laws applying to different situations, and many of these laws are formulated as standards – for example, the standard that a person operating a vehicle drive with “due care and attention.” Such an approach to the law may change the reasoning of a judge when it comes to determining liability for an accident involving an AV.

When we ask whether AVs should always follow the law, our intuitive reaction is that of course they should. Yet some reflection may lead one to conclude that such strict programming might not be realistic. After all, human drivers routinely break the law. Moreover, most of the participants explicitly agreed that as humans, we get to choose to break the law, sometimes in a reasonable way, and we get to benefit from the discretion of law enforcement.

That, however, does not necessarily translate to the world of AVs, where engineers make decisions about code and where enforcement can be automated to a high degree, both ex ante and ex post. Moreover, such flexibility in the law needs to be tailored to the specific social need; speeding is a “freedom” we enjoy with our own personal legacy cars, and that type of law breaking does not fulfill the same social function as a driver being allowed to drive onto the sidewalk in order to avoid an accident.

One participant suggested that in order to reduce frustrating interactions with AVs, and to foster greater safety overall, AVs need the flexibility not to follow the letter of the law in some situations. Consider the specific example of the shuttles running on the University of Michigan’s North Campus: those vehicles are very strict in their compliance with the law (Susan Carney, Mcity Driverless Shuttle launches on U-M’s North Campus, The Michigan Engineer (June 4, 2018), https://news.engin.umich.edu/2018/06/mcity-driverless-shuttle-launches-on-u-ms-north-campus/). They travel slowly, to the extent that their behavior can annoy human drivers. When similar shuttles from the French company Navya were deployed in Las Vegas (Paul Comfort, U.S. cities building on Las Vegas’ success with autonomous buses, Axios (Sept. 14, 2018), https://www.axios.com/us-cities-building-on-las-vegas-success-with-autonomous-buses-ce6b3d43-c5a3-4b39-a47b-2abde77eec4c.html), there was an accident on the very first run (Sean O’Kane, Self-driving shuttle crashed in Las Vegas because manual controls were locked away, The Verge (July 11, 2019, 5:32 PM), https://www.theverge.com/2019/7/11/20690793/self-driving-shuttle-crash-las-vegas-manual-controls-locked-away). A car backed into the shuttle, and where a human driver would have gotten out of the way, the shuttle did not.

One answer is that we will know it when we see it; or that solutions will emerge out of usage. However, many industry players do not favor such a risk-taking strategy. Indeed, it was argued that smaller players in the AV industry would not be able to keep up if those with deeper pockets decide to go the risky way. 

Another approach is to ask what goals we should be setting for AVs: strict abidance by legal rules? Mitigating harm? Maximizing safety? There are indications of some form of international consensus, namely a UN resolution (UN resolution paves way for mass use of driverless cars, UN News (Oct. 10, 2018), https://news.un.org/en/story/2018/10/1022812; UN Economic Commission for Europe, Revised draft resolution on the deployment of highly and fully automated vehicles in road traffic (July 12, 2018), https://www.unece.org/fileadmin/DAM/trans/doc/2018/wp1/ECE-TRANS-WP.1-2018-4-Rev_2e.pdf), that the goal should not be strict abidance by the law, and that other road users may commit errors, which would then put the AV in the position of deciding between strict legality and safety or harm.

In Singapore, the government recently published “Technical Reference 68” (Joint Media Release, Land Transport Authority, Enterprise Singapore, Standards Development Organization, & Singapore Standards Council, Singapore Develops Provisional National Standards to Guide Development of Fully Autonomous Vehicles (Jan. 31, 2019), https://www.lta.gov.sg/apps/news/page.aspx?c=2&id=8ea02b69-4505-45ff-8dca-7b094a7954f9), which sets up a hierarchy of rules, such as safety and traffic flow, under the general principle of minimizing rule breaking. This example shows that principles can act as a sense-check. That being said, the technical question of how to “code” the flexibility of a standard into AV software was not entirely answered.

Some participants also reminded the audience that human drivers do not have to “declare their intentions” before breaking the law, while AV software developers would have to. Should they be punished for that in advance? Moreover, non-compliance with the law – such as municipal ordinances on parking – is the daily routine for certain business models such as those who rely on delivery. Yet, there is no widespread condemnation of that, and most of us enjoy having consumer goods delivered at home.

More generally, as one participant asked, if a person can reasonably decide to break the law as a driver, does that mean the developer or programmer of AV software can decide to break the law in a similar way and face liability later? Perhaps the answer is to turn the question around – change the law to better reflect the driving environment so AVs don’t have to be programmed to break it. 

Beyond flexibility, participants discussed how having multiple motor vehicle codes – in effect, one per US state – makes toeing the line of the law difficult. One participant highlighted that having the software of an AV validated by one state is a big enough hurdle, and that more than a handful of such validation processes would be completely unreasonable for an AV developer. Having a single standard was identified as a positive step, while some conceded that states also serve the useful purpose of “incubating” various legal formulations and strategies, allowing the federal government in due time to “pick” the best one.

Panel II: Who Gets the Ticket? Who or What is the Legal Driver, and How Should Law Be Enforced Against Them?

The second panel looked at who or what should decide whether an automated vehicle should violate a traffic law, and who or what should be responsible for that violation. Further questions included: Are there meaningful differences among laws about driving behavior, laws about vehicle maintenance, and laws about post-crash responsibilities? How should these laws be enforced? What are the respective roles for local, state, and national authorities?

The participants discussed several initiatives, both public and private, aimed at defining, or helping to define, the notion of “driver” in the context of AVs. The Uniform Law Commission worked on the “ADP,” or “automated driving provider,” which would replace the human driver as the entity responsible in case of an accident. The latest report from the RAND Corporation highlighted that the ownership model of AVs will be different, as whole fleets will be owned and maintained by OEMs (“original equipment manufacturers”) or other types of businesses, and that these fleet operators would most likely be the drivers (see James M. Anderson et al., Rethinking Insurance and Liability in the Transformative Age of Autonomous Vehicles (2018), https://www.rand.org/content/dam/rand/pubs/conf_proceedings/CF300/CF383/RAND_CF383.pdf).

Insurance was also identified as a matter to take into consideration in shaping the notion of the AV driver. As of the date of the conference, AVs can only be insured outside of state-sponsored guarantee funds, which aim to cover policyholders in case of insurer bankruptcy. Such “non-admitted” status means that most insurers will simply refuse to insure AVs. Who gets to be the driver in the end may have repercussions on whether AVs become insurable or not.

In addition, certain participants stressed the importance of having legally recognizable persons bear the responsibility – the idea that “software” may be held liable was largely rejected by the audience. There should also be only one such person, not several, if one wants to make liability manageable from the perspective of the states’ motor vehicle codes. From a more purposive perspective, one would also want the person liable for the “conduct” of the car to be able to effectuate the changes required to minimize that liability, through technical improvements for example. That being said, such persons will only agree to shoulder liability if the costs can be reasonably estimated. Participants recognized that humans tend to trust other humans more than machines or software, are more likely to “forgive” humans for their mistakes, and sometimes trust persons who, objectively speaking, should not be trusted.

Another way forward identified by participants is product liability law, whereby AVs would be understood as a consumer good like any other. The question then becomes one of apportionment of liability, which may be rather complex, as the experience of the Navya shuttle crash in Las Vegas has shown. 

Conclusion

The key takeaway from the two panels is that AV technology now stands at a crossroads, with key decisions being made as we speak by large industry players, national governments, and industry bodies. As these decisions will have an impact down the road, all participants and panelists agreed that the “go fast and break things” approach will not lead to optimal outcomes. Specifically, one through-line from the two panels is the idea that it is humans who stand behind the technology, humans who make the key decisions, and also humans who will accept or reject commercially-deployed AVs, as passengers and road users. As humans, we live our daily lives, which for most of us include using roads in various capacities, in a densely codified environment. However, this code, unlike computer code, is in part unwritten, flexible, and subject to contextualization. Moreover, we sometimes forgive each other’s mistakes. We often think of the technical challenges of AVs in terms of sensors, cameras, and machine learning. Yet the greatest technical challenge of all may be to express all the flexibility of our social and legal rules in unforgivingly rigid programming languages.

One of the most persistent issues in public transportation is the so-called “last mile” problem. The essence of the problem is that, if the distance between the nearest transit stop and a rider’s home or office is too far to comfortably walk, potential riders will be more likely to drive than use public transit. The rise of smartphone-enabled mobility options like ridesharing, bike-share, and e-scooters has been pitched as a potential solution to this problem. However, some cities have found that these technologies may create as many problems as they solve.

This post will focus in particular on the rise of e-scooters. Over roughly the last two years, e-scooters from companies like Bird and Lime have proliferated across American cities. Often appearing seemingly out of nowhere, as companies frequently launch the product by dropping off a batch of scooters overnight without warning, they have been a source of angst for many city officials.

As the scooters spread, ridership has proliferated. Thanks to ease of use, the proliferation of smartphones, and increasing comfort with new forms of mobility, ridership has accelerated at a faster pace than ride-hailing apps, bikeshare programs, or other mobility platforms that have developed in recent years.

With this growth, though, have come challenges. In June, Nashville chose to ban e-scooters in the aftermath of the city’s first rider death. Last year, in response to concerns about safety and obstruction of sidewalks, Cleveland banned e-scooters. In the initial rollout period Cleveland was far from alone, as cities from St. Louis to San Francisco to Santa Monica also moved to ban or significantly reduce the number of scooters allowed.

Some of these bans, or at least use restrictions, may have been justified. Because they have no defined ports at which to be put away, scooters are often left blockading the sidewalk. At least 8 scooter riders have died in crashes, and users often remain confused about what laws apply to them and where they can ride. Hospitals across the country have seen a spike in emergency room visits related to scooter crashes, and the Centers for Disease Control has found that head trauma is the most common injury resulting from a scooter crash.

Slowly though, cities have begun experimenting with ways to let scooters in without letting them run wild. Last month Cleveland allowed scooters back in, with new limitations on where they are allowed to go and who is allowed to ride. Norfolk, VA recently contracted with e-scooter company Lime to allow them to have a local monopoly over scooter service in the city. The move may allow Norfolk greater control over how Lime operates within its borders, which could ultimately increase safety.

Given the obvious potential for e-scooters to increase mobility to parts of a city that aren’t within easy walking distance of transit stations, cities should continue working to find ways to allow them in while mitigating safety concerns. The results in cities like Norfolk and Cleveland that are working to introduce regulation to this new industry will be important to watch in the coming months.

As we move towards a future of fully automated vehicles, the types of crime – and the attendant need for criminal enforcement – committed with cars are likely to evolve. As our transit system becomes more automated, the danger of a hack, and the difficulty of discovering the crime through ordinary policing tactics, is likely to increase. Some experts have expressed concerns that automated vehicles would be just as easy to use for delivery of drugs or guns as for more innocuous packages. Others, such as Duke University professor Mary Cummings, say that vehicles are too easy to hack and steer off course.

Going beyond relatively ordinary crimes such as theft, an unclassified FBI report obtained by The Guardian revealed the agency’s concern that autonomous vehicles could be commandeered and utilized as a “potential lethal weapon” or even self-driving bomb.

The likelihood that automated vehicles will generally obey the traffic laws complicates the ability of police to find crimes being committed with these vehicles using traditional methods. As I have written previously, traffic stops prompted by minor violations are a point of contact at which cops often look for evidence of more serious crime. While there is some hope that a reduction in such stops may reduce racial bias in policing, it also highlights the need for law enforcement to reduce dependence on this method of tracking serious crime.

While the potential for criminal activity or even terrorism using automated vehicles is a real possibility, some experts are less concerned. Arthur Rizer, from the conservative think tank R Street Institute, argued that the lives saved by adoption of driverless technology will far outweigh any risk of criminal or terror threat from a hacking. Rizer calls the risk “minute compared to the lives that we will save just from reducing traffic accidents.”

If a significant portion of the roughly 40,000 traffic fatalities per year can be prevented by the adoption of automated vehicles, Rizer is likely correct that the benefits will outweigh any risk that vehicles will be hacked by bad actors. Nevertheless, there is a possibility that, as CalTech professor Patrick Lin warns, automated vehicles “may enable new crimes that we can’t even imagine today.” Going forward, it will be important for law enforcement to develop new techniques of tracking crime facilitated by automated vehicles.

Earlier this month, the Journal of Law and Mobility hosted our first annual conference at the University of Michigan Law School. The event provided a great opportunity to convene some of the top minds working at the intersection of law and automated vehicles. What struck me most about the conference, put on by an organization dedicated to law and mobility, was how few of the big questions related to automated vehicles are actually legal questions at this point in their development.

The afternoon panel on whether AVs should always follow the rules of the road as written was emblematic of this juxtaposition. The panel nominally focused on whether AVs should follow traffic laws. Should an automated vehicle be capable of running a red light, or swerving across a double yellow line while driving down the street? Should it always obey the posted speed limit?

The knee-jerk reaction of most people would probably be something along the lines of, “of course you shouldn’t program a car that can break the law.” After all, human drivers are supposed to follow the law. So why should an automated vehicle, which is programmed in advance by a human making a sober, conscious choice, be allowed to do any differently?

Once you scratch the surface though, the question becomes much more nuanced. Human drivers break the law in all kinds of minor ways in order to maintain safety, or in response to the circumstances of the moment. A human driver will run a red light if there is no cross-traffic and the car bearing down from behind is showing no signs of slowing down. A human will drive into the wrong lane or onto the shoulder to avoid a downed tree branch, or a child rushing out into the street. A human driver may speed away if they notice a car near them acting erratically. All of these actions, although they violate the law, may be taken in the interest of safety in the right circumstances. Even knowing they violated the law, a human driver ticketed in such a circumstance would likely feel the penalty was unjustified.

If automated vehicles should be able to break the law in at least some circumstances, the question shifts – which circumstances? Answering that question is beyond the scope of this post. At the moment, I don’t think anyone has the right answer. Instead, the point of this post is to highlight the type of moment-to-moment decisions every driver makes every day to keep themselves and those around them safe. The rules of the road provide a rough cut, codifying what will be best for most people most of the time. They could not possibly anticipate every situation and create a special legal rule for that situation. If they tried, the traffic laws would quickly grow to fill several libraries.

In my view, the question of whether an AV should be able to break the law is only tangentially a legal question. After arriving at an answer of, “probably sometimes,” the question quickly shifts to when, and in what circumstances, and whether the law needs to adapt to make different maneuvers legal. These questions have legal aspects to them, but they are also moral and ethical questions weighted with a full range of human driving experience.  Answering them will be among the most important and difficult challenges for the AV industry in the coming years.

Guest Blog by Jesse Halfon

Last month, two California Highway Patrol (CHP) officers made news following an arrest for drunk driving. What made the arrest unusual was that the officers initially observed the driver asleep behind the wheel while the car, a Tesla Model S, drove 70 mph on Autopilot, the vehicle’s semi-automated driving system.

Much of the media coverage about the incident revolved around the CHP maneuver to safely bring the vehicle to a stop. The officers were able to manipulate Tesla Autopilot to slow down and ultimately stop mid-highway using two patrol vehicles, one in front and one behind the ‘driverless’ car.

But USC Law Professor Orin Kerr mused online about a constitutional quandary relating to the stop, asking, “At what point is a driver asleep in an electric car that is on autopilot ‘seized’ by the police slowing down and stopping the car by getting in front of it?” This question centered on when a sleeping person is seized, a reasonable 4th Amendment inquiry given the U.S. Supreme Court standard that a seizure occurs when a reasonable person would not have felt ‘free to leave’ or otherwise terminate the encounter with law enforcement.[1]

Kerr’s issue was largely hypothetical given that the police in this situation unquestionably had the legal right to stop the vehicle (and thereby seize the driver) based on public safety concerns alone.

However, a larger 4th Amendment question regarding semi-automated vehicles looms. Namely, what constitutes ‘reasonable suspicion’ to stop the driver of a vehicle on Autopilot for a traditional traffic violation like ‘reckless driving’ or ‘careless driving’?[2] Though there are no current laws that prescribe the safe operation of a semi-autonomous vehicle, many common traffic offenses are implicated by the use of automated driving features.

Some ‘automated’ traffic violations will be unchanged from the perspective of law enforcement. For example, if a vehicle on Autopilot[3] fails to properly stay in its lane, the officer can assess the vehicle’s behavior objectively and ticket the driver, who is ultimately responsible for safe operation of the automobile. Other specific traffic violations will also be clear-cut. New York, for example, still requires by statute that a driver keep at least one hand on the wheel.[4] Many states ban texting while driving, which, though often ambiguous, allows for more obvious visual cues for an officer to assess.

However, other traffic violations like reckless driving[5] will be more difficult to assess in the context of semi-automated driving.

YouTube is filled with videos of people playing cards, dancing, and doing various other non-driving activities in their Teslas while Autopilot is activated. While most of these videos are performative, real-world scenarios are commonplace. Indeed, for many consumers, the entire point of having a semi-autonomous driving system is to enable safe multi-tasking while behind the wheel.

Take, for example, the Tesla driver who is seen biting into a cheeseburger with both hands on the sandwich (and no hands on the wheel). Is this sufficient for an officer to stop a driver for careless driving? Or what about a driver writing a note on a piece of paper in the center console while talking on the phone? If during this activity the driver’s eyes are off the road for 3-4 seconds, is there reasonable suspicion of ‘reckless driving’ that would justify a stop? 5-6 seconds? 10? 20?

In these types of cases, the driver may argue that they were safely monitoring their semi-automated vehicle within the appropriate technological parameters. If a vehicle is maintaining a safe speed and lane-keeping on a low traffic highway, drivers will protest – how can they be judged as ‘careless’ or ‘reckless’ for light multi-tasking or brief recreation while the car drives itself?

The 4th Amendment calculus will be especially complicated for officers given that they will be unable to determine from their vantage point whether a semi-autonomous system is even activated. Autopilot is an optional upgrade for Tesla vehicles and vehicles that are equipped with L2/L3 systems will often be driven inattentively without the ‘driverless’ feature enabled. Moreover, most vehicles driven today don’t even have advanced automated driving features.

A Tesla driver whose hands are off the steering wheel could be safely multi-tasking using Autopilot. But they could also be steering with their legs or not at all. This leaves the officer, tasked with monitoring safe driving for public protection, in a difficult situation. It also leaves drivers, who take advantage of semi-automated systems, vulnerable to traffic stops that are arguably unnecessary and burdensome.

Of course, a driver may succeed in convincing a patrol officer not to issue a ticket by explaining their carefully considered use of the semi-automated vehicle. Or the driver could have a ‘careless driving’ ticket dismissed in court using the rationale of safely using the technology. But once a police-citizen interaction is initiated, the stakes are high.

Designing a semi-automated vehicle that defines the parameters of safe driving is complex. Crafting constitutional jurisprudence that defines the parameters of police behavior may be even more complex. Hopefully the courts are up to the task of navigating this challenging legal terrain.

Jesse Halfon is an attorney in Dykema’s Automotive and Products Liability practice group and a member of its Mobility and Advanced Transportation Team.


[1] United States v. Mendenhall, 446 U.S. 544, 554 (1980); United States v. Drayton, 536 U.S. 194, 202 (2002); Florida v. Bostick, 501 U.S. 429, 435-36 (1991).

[2] Some traffic violations are misdemeanors or felonies. To make an arrest in public for a misdemeanor, an officer needs probable cause and the crime must have occurred in the officer’s presence.  For a Terry stop involving a traffic misdemeanor, only reasonable suspicion is required.

[3] Tesla Autopilot is one of several semi-automated systems currently on the market. Others, including Cadillac Super Cruise, Mercedes-Benz Drive Pilot, and Volvo’s Pilot Assist, offer comparable capabilities.

[4] New York Vehicle and Traffic Law § 1226.

[5] Most states have a criminal offense for reckless driving. Michigan’s statute is representative and defines reckless driving as the operation of a vehicle “in willful or wanton disregard for the safety of persons or property”. See Michigan Motor Vehicle Code § 257.626. Michigan also has a civil infraction for careless driving, which is violated when a vehicle is operated in a ‘careless or negligent manner’. See Michigan Motor Vehicle Code § 257.626b.

CAVs and the Traffic Stop

The traffic stop has long been a primary point of interaction between police and the community. As Department of Justice (DOJ) investigations into local police departments in Ferguson, Baltimore, and Chicago made clear in recent years, it is also a moment that is open to large-scale abuse. The rise of connected and autonomous vehicles (CAVs) will fundamentally alter, and perhaps dramatically reduce the occurrence of, this common police tactic. In order to avoid replicating the problematic aspects of traffic stops, communities need to grapple with the ways in which their current system has failed, and how policing should look in the future.

Local police departments in at least some parts of the country have been found to use routine traffic stops as a fundraising tool for the city. Due to either implicit or explicit bias, such policies frequently have an outsized impact on minority members of the community. DOJ’s investigation of the Ferguson Police Department unearthed a city government primarily concerned with the use of traffic stops to “fill the revenue pipeline.” Particularly in light of decreased sales tax revenue, city officials saw the need to increase traffic citations as “not an insignificant issue.” This attitude filtered down from the City Council and Financial Director to line officers, who were regularly reminded of the need to increase “traffic productivity.” In Ferguson, the demand that the police department be a revenue generation machine contributed to racial bias in the city’s criminal justice system. African American drivers were the subjects of 85% of the traffic stops, despite constituting only 67% of the population. Among those stopped, 11% of black drivers were searched, compared to only 5% of white drivers. While the Ferguson report throws the twin problems of racialized policing and use of the police for revenue generation into stark relief, the city is far from alone. The investigations in Baltimore and Chicago found similar abuses. A review of academic literature by researchers at Princeton found that “Blacks and Hispanics are more likely to be stopped by the police, convicted of a crime, and . . . issued a lengthy prison sentence” than similarly situated whites.

These findings highlight the centrality of the traffic stop to modern policing. Traffic stops not only lead directly to citations – for speeding, missing stop signs, and the like – but also to searches of individuals and vehicles that may lead to more serious crimes for things like possession of drugs or weapons. The importance of traffic stops has been spurred on by a Supreme Court that has given its blessing to pretextual stops, in which an officer can stop a car as long as there is a valid reason, regardless of their actual reason. Widespread use of CAVs, however, could seriously cut down on pretextual stops. If a CAV is programmed to travel no faster than the speed limit, to always signal turns, and to never run a red light, then the number of available pretexts is significantly reduced. While many commentators have been hesitant to think that this shift will lead to large-scale shifts in police tactics or a significant reduction in abuses, they have at least highlighted that possibility.

While CAVs and other new technology may lead to a shift in police tactics, they alone will not eliminate, and may not even reduce, biased policing. Unless addressed through changes to underlying structures of taxation or spending, the financial imperative to turn the police force into a revenue generator will continue to drive over-policing of minor violations. Without addressing implicit bias, this over-policing will continue to disproportionately target minority communities. The CAV era may channel these pressures in new directions. But cities that wish to address the ongoing challenge of racially biased policing must initiate structural changes, rather than merely hope that technology will save them.

Two recent news stories build interestingly on my recent blog post about CAVs and privacy. The first, from Forbes, details law enforcement use of “reverse location” orders, whereby investigators can obtain from Google information on all Google users in a given location at a given time. This would allow, for example, police to obtain data on every Google account user within a mile of a gas station when it was robbed. Similar orders have been used to obtain data from Facebook and Snapchat.

Look forward a few years and it’s not hard to imagine similar orders being sent to the operators of CAVs, to obtain the data of untold numbers of users at the time of a crime. The problem here is that such orders can cast far too wide a net and allow law enforcement access to the data of people completely uninvolved with the case being investigated. In one of the cases highlighted by Forbes, the area from which investigators requested data included not only the store that was robbed, but also nearby homes. The same situation could occur with CAVs, pulling in data from passengers completely unrelated to a crime scene who happen to have been driving nearby.

The other story comes from The Verge, which covers data mining done by GM in Los Angeles and Chicago in 2017.  From the article:

GM captured minute details such as station selection, volume level, and ZIP codes of vehicle owners, and then used the car’s built-in Wi-Fi signal to upload the data to its servers. The goal was to determine the relationship between what drivers listen to and what they buy and then turn around and sell the data to advertisers and radio operators. And it got really specific: GM tracked a driver listening to country music who stopped at a Tim Horton’s restaurant. (No data on that donut order, though.)

That’s an awful lot of information on a person’s daily habits. While many people have become accustomed (or perhaps numb) to the collection of their data online, one wonders how many have given thought to the data collected by their vehicle. The article also points out the scale of the data collected by connected cars and what it could be worth on the market:

According to research firm McKinsey, connected cars create up to 600GB of data per day — the equivalent of more than 100 hours of HD video every 60 minutes — and self-driving cars are expected to generate more than 150 times that amount. The value of this data is expected to reach more than $1.5 trillion by the year 2030, McKinsey says.

Obviously, creators and operators of CAVs are going to want to tap into the market for data. But given the push for privacy legislation I highlighted in my last post, they may soon have to contend with limits on just what they can collect.

~ P.S. I can’t resist adding a brief note on some research from my undergraduate alma mater, the University of Illinois. It seems some researchers there are taking inspiration from the eyes of mantis shrimp to improve the capability of CAV cameras.


By the end of this year, Alphabet subsidiary Waymo plans to launch one of the nation’s first commercial driverless taxi services in Phoenix, Arizona. As preparations move forward, there has been increasing attention focused on Arizona’s regulatory scheme regarding connected and automated vehicles (CAVs), and the ongoing debate over whether and how their deployment should be more tightly controlled.

In 2015, Arizona Governor Doug Ducey issued an executive order directing state agencies to “undertake any steps necessary to support the testing and operation of self-driving vehicles” on public roads in the state. The order helped facilitate the Phoenix metro area’s development as a key testing ground for CAV technology and laid the groundwork for Waymo’s pioneering move to roll out its driverless service commercially in the state. It has also been the target of criticism for not focusing enough on auto safety, particularly in the aftermath of a deadly crash involving an Uber-operated CAV in March.

As the technology advances and the date of Waymo’s commercial rollout approaches, Governor Ducey has issued a new executive order laying out a few more requirements that CAVs must comply with in order to operate on Arizona’s streets. While the new order is still designed to facilitate the proliferation of CAVs, it includes new requirements that CAV owners affirm that the vehicles meet all relevant federal standards, and that they are capable of reaching a “minimal risk condition” if the autonomous system fails.

Along with these basic safety precautions, the order also directs the Arizona Departments of Public Safety and Transportation to issue a protocol for law enforcement interaction with CAVs. This protocol is a public document intended both to guide officers in interactions with CAVs and to facilitate owners in designing their cars to handle those interactions. The protocol, issued by the state Department of Transportation in May, requires CAV operators to file an interaction protocol with the Department explaining how the vehicle will operate during emergencies and in interactions with law enforcement. As CAVs proliferate, a uniform standard for police interactions across the industry may become necessary for purposes of administrative efficiency. If and when that occurs, the initial standard set by Waymo in Arizona is likely to bear an outsized influence on the nationwide industry.

Critics have called the new executive order’s modest increase in safety requirements too little for such an unknown and potentially dangerous technology. Even among critics, however, there is no agreement as to how exactly CAVs should be regulated. Many have argued for, at minimum, more transparency from the CAV companies regarding their own safety and testing procedures. On the other hand, advocates of Arizona’s relaxed regulatory strategy suggest that public unease with CAVs, along with the national news coverage of each accident, will be enough to push companies to adopt their own stringent testing and safety procedures.

This more hands-off regulatory approach will get its first close-up over the next few months in Arizona. The results are likely to shape the speed and direction of growth in the industry for years to come.


For many people, syncing their phone to their car is a convenience – allowing them to make hands-free calls or connect to media on their phone through the car’s infotainment system. But doing so can leave a lot of data on the car’s hardware, even after a user believes they have deleted such data. That was the case in a recent ATF investigation into narcotics and firearms trafficking, where federal law enforcement agents were issued a warrant to search a car’s computer for passwords, voice profiles, contacts, call logs, and GPS locations, all of which they believed had been left on the car’s on-board memory. While it’s uncertain just what was recovered, an executed search warrant found by Forbes claims the information extraction was successful.

While this case doesn’t necessarily raise the same issues of government access to data found in the Supreme Court’s recent Carpenter decision, it does illustrate the growing amount of personal data available to outside actors via the computer systems within our vehicles. And while the 4th Amendment can (usually) shield individuals from overreach by government, personal data represents a potential target for malicious actors, as shown by the recent data breach at Facebook, which exposed the data of 30 million users. As cars become yet another part of the greater “internet of things” (IoT), automakers have to confront issues of data protection and privacy. Security researchers have already begun to prod vehicle systems for weaknesses – one group was able to breach the computer of a Mazda in 10 seconds.

There has of late been a great deal of talk, and some action, in Washington, Brussels, and Sacramento, towards mandating greater privacy and security standards. Earlier this month, the Senate Commerce Committee held a hearing on Data Privacy in the wake of the European Union’s General Data Protection Regulation, which took effect in May, and California’s Consumer Privacy Act, which was passed in June. Last month, California also passed a bill that sets cybersecurity standards for IoT devices – and there are similar bills that have been introduced in the House and Senate. While it remains to be seen if either of those bills gain traction, it is clear that there is an interest in more significant privacy legislation at the state and federal level, an interest that has to be considered by automakers and other CAV developers as CAVs move closer and closer to wide-scale deployment.

Cite as: Daniel A. Crane, The Future of Law and Mobility, 2018 J. L. & Mob. 1.

Introduction

With the launch of the new Journal of Law and Mobility, the University of Michigan is recognizing the transformative impact of new transportation and mobility technologies, from cars, to trucks, to pedestrians, to drones. The coming transition towards intelligent, automated, and connected mobility systems will transform not only the way people and goods move about, but also the way human safety, privacy, and security are protected, cities are organized, machines and people are connected, and the public and private spheres are defined.

Law will be at the center of these transformations, as it always is. There has already been a good deal of thinking about the ways that law must adapt to make connected and automated mobility feasible in areas like tort liability, insurance, federal preemption, and data privacy. [8: See, e.g., Daniel A. Crane, Kyle D. Logue & Bryce Pilz, A Survey of Legal Issues Arising from the Deployment of Autonomous and Connected Vehicles, 23 Mich. Tel. & Tech. L. Rev. 191 (2017).] But it is also not too early to begin pondering the many implications for law and regulation arising from the technology’s spillover effects as it begins to permeate society. For better or worse, connected and automated mobility will disrupt legal practices and concepts in a variety of ways beyond the obvious “regulation of the car.” Policing practices and Fourth Amendment law, now so heavily centered on routine automobile stops, will of necessity require reconsideration. Notions of ownership of physical property (i.e., an automobile) and data (i.e., accident records) will be challenged by the automated sharing economy. And the economic and regulatory structure of the transportation network will have to be reconsidered as mobility transitions from a largely individualistic model of drivers in their own cars pursuing their own ends within the confines of general rules of the road to a model in which shared and interconnected vehicles make collective decisions to optimize the system’s performance. In these and many other ways, the coming mobility revolution will challenge existing legal concepts and practices with implications far beyond the “cool new gadget of driverless cars.”

Despite the great importance of the coming mobility revolution, the case for a field of study in “law and mobility” is not obvious. In this inaugural essay for the Journal of Law and Mobility, I shall endeavor briefly to make that case.

I. Driverless Cars and the Law of the Horse

A technological phenomenon can be tremendously important to society without necessarily meriting its own field of legal study, because of what Judge Frank Easterbrook has described as “the law of the horse” problem. [9: Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207, 207–16.] Writing against the burgeoning field of “Internet law” in the mid-1990s, Easterbrook argued against organizing legal analysis around particular technologies:

The best way to learn the law applicable to specialized endeavors is to study general rules. Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on “The Law of the Horse” is doomed to be shallow and to miss unifying principles. [10: Id.]

Prominent advocates of “Internet law” as a field rebutted Easterbrook’s concern, arguing that focusing on cyberlaw as a field could be productive to understanding aspects of this important human endeavor in ways that merely studying general principles might miss. [11: Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999).] Despite Easterbrook’s protestation, a distinct field of cyberlaw has grown up in recent decades.

“The law of the horse” debate seems particularly apt to the question of law and mobility since the automobile is the lineal successor of the horse as society’s key transportation technology. Without attempting to offer a general solution to the “law of the horse” question, it is worth drawing a distinction between two different kinds of disruptive technologies—those in which the technological change produces social changes indirectly and without significant possibilities for legal intervention, and those in which law is central to the formation of the technology itself.

An example of the first species of technological change is air conditioning. The rise of air conditioning in the mid-twentieth century had tremendous effects on society, including dramatic increases in business productivity, changes in living patterns as people shifted indoors, and the extension of retail store hours and hence the growing commercialization of American culture. [12: Stan Cox, Losing Our Cool: Uncomfortable Truths About Our Air-Conditioned World (and Finding New Ways to Get Through the Summer) (2012).] The South’s share of U.S. population was in steady decline until the 1960s when, in lockstep with the growth of air conditioning and people’s willingness to settle in hot places, the trend abruptly reversed and the South’s share grew dramatically. [13: Paul Krugman, Air Conditioning and the Rise of the South, New York Times, Mar. 28, 2015.] The political consequences were enormous—from Richard Nixon through George W. Bush, every elected President hailed from warm climates.

One could say, without exaggeration, that Willis Carrier’s frigid contraption exerted a greater effect on American business, culture, and politics than almost any other invention of the twentieth century. And yet, it would seem silly to launch a field of study in “law and air conditioning.” Air conditioning’s social, economic, and political effects were largely indirect—the result of human decisions in response to the new circumstances created by the new technology rather than an immediate consequence of the technology itself. Even if regulators had foreseen the dramatic demographic effects of air conditioning’s spread, there is little they could have done (short of killing or limiting the technology) to mediate the process of change by regulating the technology.

Contrast the Internet. Like air conditioning, the Internet has had tremendous implications for culture, business, and politics, but unlike air conditioning, many of these effects were artifacts of design decisions regarding the legal architecture of cyberspace. From questions of taxation of online commercial transactions, [14: See, e.g., John E. Sununu, The Taxation of Internet Commerce, 39 Harv. J. Leg. 325 (2002).] to circumvention of digital rights management technologies, [15: See, e.g., David Nimmer, A Riff on Fair Use in the Digital Millennium Copyright Act, 148 U. Pa. L. Rev. 673 (2000).] to personal jurisdiction over geographically remote online interlocutors, [16: Note, No Bad Puns: A Different Approach to the Problem of Personal Jurisdiction and the Internet, 116 Harv. L. Rev. 1821 (2003).] and in countless other ways, a complex of legal and regulatory decisions created the modern Internet. From the beginning, law was hovering over the face of cyberspace. Al Gore may not have created the Internet, but lawyers had as much to do with it as did engineers.

The Internet’s legal architecture was not established at a single point in time, by a single set of actors, or with a single set of ideological commitments or policy considerations. Copyright structures were born of the contestation among one set of stakeholders, which was distinct from the sets of stakeholders contesting over tax policy, net neutrality, or revenge porn. And yet, the decisions made in separate regulatory spheres often interact in underappreciated ways to lend the Internet its social and economic character. Tax policy made Amazon dominant in retail, copyright policy made Google dominant in search, and data protection law (or its absence) made Facebook dominant in social media—with the result that all three have become antitrust problems.

Whether or not law students should be encouraged to study “Internet law” in a discrete course, it seems evident with the benefit of thirty years of hindsight that the role of law in mediating cyberspace cannot be adequately comprehended without a systemic inquiry. Mobility, I would argue, will be much the same. While the individual components of the coming shift toward connectivity and automation—i.e., insurance, tort liability, indemnification, intellectual property, federal preemption, municipal traffic law, etc.—will have analogues in known circumstances and hence will benefit from consideration as general questions of insurance, torts, and so forth, the interaction of the many moving parts will produce a novel, complex ecosystem. Given the potential of that ecosystem to transform human life in many significant ways, it is well worth investing some effort in studying “law and mobility” as a comprehensive field.

II. An Illustration from Three Connected Topics

It would be foolish to attempt a description of mobility’s future legal architecture at this early stage in the mobility revolution. However, in an effort to provide some further motivation for the field of “law and mobility,” let me offer an illustration from three areas in which legal practices and doctrines may be affected in complex ways by the shift toward connected and automated vehicles. Although these three topics entail consideration of separate fields of law, the technological and legal decisions made with respect to them could well have system-wide implications, which shows the value of keeping the entire system in perspective as discrete problems are addressed.

A. Policing and Public Security

For better or for worse, the advent of automated vehicles will redefine the way that policing and law enforcement are conducted. Routine traffic stops are fraught, but potentially strategically significant, moments for police-citizen interactions. Half of all citizen-police interactions, [17: Samuel Walker, Science and Politics in Police Research: Reflections on Their Tangled Relationship, 593 Annals Am. Acad. Pol. & Soc. Sci. 137, 142 (2004); Matthew R. Durose et al., U.S. Dep’t of Justice, Office of Justice Programs, Bureau of Justice Statistics, Contacts Between Police and the Public, 2005, at 1 (2007).] more than forty percent of all drug arrests, [18: David A. Sklansky, Traffic Stops, Minority Motorists, and the Future of the Fourth Amendment, 1997 Sup. Ct. Rev. 271, 299.] and over thirty percent of police shootings [19: Adams v. Williams, 407 U.S. 143, 148 n.3 (1972).] occur in the context of traffic stops. Much of the social tension over racial profiling and enforcement inequality has arisen in the context of police practices with respect to minority motorists. [20: Ronnie A. Dunn, Racial Profiling: A Persistent Civil Rights Challenge Even in the Twenty-First Century, 66 Case W. Res. L. Rev. 957, 979 (2016) (reporting statistics on the disproportionate effects of routine traffic stops on racial minorities).] The traffic stop is central to modern policing, including both its successes and pathologies.

Will there continue to be routine police stops in a world of automated vehicles? Surely traffic stops will not disappear altogether, since driverless cars may still have broken taillights or lapsed registrations. [21: See John Frank Weaver, Robot, Do You Know Why I Stopped You?] But with the advent of cars programmed to follow the rules of the road, the number of occasions for the police to stop cars will decline significantly. As a general matter, the police need probable cause to stop a vehicle on a roadway. [22: Whren v. United States, 517 U.S. 806 (1996).] A world of predominantly automated vehicles will mean many fewer traffic violations, and hence many fewer police stops, many fewer police-citizen interactions, and many fewer arrests based on evidence of crime discovered during those stops.

On the positive side, that could mean a significant reduction in some of the abuses and racial tensions around policing. But it could also deprive the police of a crime detection dragnet, with the consequence either that the crime rate will increase due to the lower detection rate or that the police will deploy new crime detection strategies that could create new problems of their own.

Addressing these potentially sweeping changes to the practices of policing brought about by automated vehicle technologies requires considering both the structure of the relevant technology and the law itself. On the technological side, connected and automated vehicles could be designed for easy monitoring and control by the police. That could entail a decline in privacy for vehicle occupants, but it could also reduce the need for physical stops by the police (cars that can be remotely monitored can be remotely ticketed) and hence some of the police-citizen roadside friction that has dominated recent troubles.

On the legal side, the advent of connected and automated vehicles will require rethinking the structure of Fourth Amendment law as applied to automobiles. At present, individual rights against searches and seizures often rely on distinctions between drivers and passengers, or owners and occupants. For example, a passenger in a car may challenge the legality of the police stop of the car, [23: Brendlin v. California, 551 U.S. 249 (2007).] but has diminished expectations of privacy in the search of the vehicle’s interior if he or she is not the vehicle’s owner or bailee. [24: United States v. Jones, 565 U.S. 400 (2012).] In a mobility fleet without drivers and (as discussed momentarily) perhaps without many individual owners, these conceptions of the relationship of people to cars will require reconsideration.

B. Ownership, Sharing, and the Public/Private Divide

In American culture, the individually owned automobile has historically been far more than a transportation device—it has been an icon of freedom, mobility, and personal identity. As Ted McAllister has written concerning the growth of automobile culture in the early twentieth century:

The automobile squared perfectly with a distinctive American ideal of freedom—freedom of mobility. Always a restless nation, with complex migratory patterns throughout the 17th, 18th, and 19th centuries, the car came just as a certain kind of mobility had reached an end with the closing of the frontier. But the restlessness had not ended, and the car allowed control of space like no other form of transportation. [25: Ted V. McAllister, Cars, Individualism, and the Paradox of Freedom in a Mass Society.]

Individual car ownership has long been central to conceptions of property and economic status. The average American adult currently spends about ten percent of his or her income on an automobile, [26: Máté Petrány, This Is How Much Americans Spend on Their Cars.] making it by far his or her most expensive item of personal property. The social costs of individual automobile ownership are far higher. [27: Edward Humes, The Absurd Primacy of the Automobile in American Life; Robert Moor, What Happens to the American Myth When You Take the Driver Out of It?]

The automobile’s run as an icon of social status through ownership may be ending. Futurists expect that the availability of on-demand automated vehicle service will complete the transition from mobility as personal property to mobility as a service, as more and more households stop buying cars and rely instead on ride sharing services. [28: Smart Cities and the Vehicle Ownership Shift.] Ride sharing companies like Uber and Lyft have long been on this case, and now automobile manufacturers are scrambling to market their vehicles as shared services. [29: Ryan Felton, GM Aims to Get Ahead of Everyone with Autonomous Ride-Sharing Service in Multiple Cities by 2019.] With the decline of individual ownership, what will happen to conceptions of property in the physical space of the automobile, in the contractual right to use a particular car or fleet of automobiles, and in the data generated about occupants and vehicles?

The coming transition from individual ownership to shared service will also raise important questions about the line between the public and private domains. At present, the “public sphere” is defined by mass transit, whereas the individually owned automobile constitutes the “private sphere.” The public sphere operates according to ancient common carrier rules of universal access and non-discrimination, whereas a car is not quite “a man’s castle on wheels” for constitutional purposes, [30: See Illinois v. Lidster, 540 U.S. 419, 424 (2004) (“The Fourth Amendment does not treat a motorist’s car as his castle.”).] but still a non-public space dominated by individual rights as against the state rather than public obligations. [31: E.g., Byrne v. Rutledge, 623 F.3d 46 (2d Cir. 2010) (holding that motor vehicle license plates were nonpublic fora and that the state’s ban on vanity plates referencing religious topics violated the First Amendment).] As more and more vehicles are held and used in shared fleets rather than individual hands, the traditional line between publicly minded “mass transit” and individually minded vehicle ownership will come under pressure, with significant consequences for both efficiency and equality.

C. Platform Mobility, Competition, and Regulation

The coming transition toward ride sharing fleets rather than individual vehicle ownership described in the previous section will have additional important implications for the economic structure of mobility—which of course will raise important regulatory questions as well. At present, the private transportation system is highly atomistic. In the United States alone, there are 264 million individually owned motor vehicles in operation. [32: U.S. Dep’t of Energy, Transportation Energy Data Book, Chapter 8, Household Vehicles and Characteristics, Table 8.1, Population and Vehicle Profile, https://cta.ornl.gov/data/chapter8.shtml (last visited May 29, 2018).] For the reasons previously identified, we should expect many of these vehicles to shift toward corporate-owned fleets in coming years. The question then will be how many such fleets will operate—whether we will see robust fleet-to-fleet competition or instead convergence toward a few dominant providers, as we are seeing in other important areas of the “platform economy.”

There is every reason to believe that, before too long, mobility will tend in the direction of other monopoly or oligopoly platforms because it will share their economic structure. The key economic facts behind the rise of dominant platforms like Amazon, Twitter, Google, Facebook, Microsoft, and Apple are the presence of scale economies and network effects—attributes that make a system more desirable for other users as new users join. [33: See generally David S. Evans & Richard Schmalensee, A Guide to the Antitrust Economics of Networks, Antitrust, Spring 1996, at 36; Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, 8 J. Econ. Persp. 93 (1994).] In the case of the mobility revolution, a number of features are suggestive of future scale economies and network effects. The more cars in a fleet, the more likely it is that one will be available when summoned by a user. The more cars connected to other cars in a fleet, the higher the quality of the information (on such topics as road and weather conditions and vehicle performance) available within the fleet and the steeper the machine learning curve.

As is true with other platforms, the mere presence of scale economies and network effects does not have to lead inexorably to market concentration or monopoly. Law and regulation may intervene to mitigate these effects, for example by requiring information sharing or interconnection among rival platforms. But such mandatory information sharing or interconnection obligations are not always advisable, as they can diminish a platform’s incentives to invest in its own infrastructure or otherwise impair incentives to compete.

Circling back to the “law of the horse” point raised at the outset, these issues are not, of course, unique to law and mobility. But this brief examination of these three topics—policing, ownership, and competition—shows the value of considering law and mobility as a distinct topic. Technological, legal, and regulatory decisions we make with respect to one particular set of problems will have implications for distinct problems perhaps not under consideration at that moment. For example, law and technology will operate conjunctively to define the bounds of privacy expectations in connected and automated vehicles, with implications for search and seizure law, property and data privacy norms, and sharing obligations to promote competition. Pulling a “privacy lever” in one context—say to safeguard against excessive police searches—could have spillover effects in another context, for example by bolstering a dominant mobility platform’s arguments against mandatory data sharing. Although the interactions between the different technological decisions and related legal norms are surely impossible to predict or manage with exactitude, consideration of law and mobility as a system will permit a holistic view of this complex, evolving ecosystem.

Conclusion

Law and regulation will be at the center of the coming mobility revolution. Many of the patterns we will observe at the intersection of law and the new technologies will be familiar—at least if we spend the time to study past technological revolutions—and general principles will be sufficient to answer many of the rising questions. At the same time, there is a benefit to considering the field of law and mobility comprehensively with an eye to understanding the often subtle interactions between discrete technological and legal decisions. The Journal of Law and Mobility aims to play an important role in this fast-moving space.


Frederick Paul Furth, Sr. Professor of Law, University of Michigan. I am grateful for helpful comments from Ellen Partridge and Bryant Walker Smith. All errors are my own.