(Re)Writing the Rules of The Road: Reflections from the Journal of Law and Mobility’s 2019 Conference

On March 15th, 2019, the Journal of Law and Mobility, part of the University of Michigan’s Law and Mobility Program, presented its inaugural conference, entitled “(Re)Writing the Rules of The Road.” The conference was focused on issues surrounding the relationship between automated vehicles (“AVs”) and the law. In the afternoon, two panels of experts from academia, government, industry, and civil society were brought together to discuss how traffic laws should apply to automated driving and the legal person (if any) who should be responsible for traffic law violations. The afternoon’s events occurred under a modified version of the Chatham House Rule, to allow the participants to speak more freely. In the interest of allowing those who did not attend to still benefit from the day’s discussion, the following document was prepared. This document is a summary of the two panels, and an effort has been made to de-identify the speakers while retaining the information conveyed.

Panel I: Crossing the Double Yellow Line: Should Automated Vehicles Always Follow the Rules of the Road as Written?

This panel focused on whether automated vehicles should be designed to strictly follow the rules of the road. Questions included – How should these vehicles reconcile conflicts between those rules? Are there meaningful differences between acts such as exceeding the posted speed limit to keep up with the flow of traffic, crossing a double yellow line to give more room to a bicyclist, or driving through a stop sign at the direction of a police officer? If flexibility and discretion are appropriate, how can this be reflected in law?

Within the panel, there was overall agreement among the participants that we need both flexibility in making the law and flexibility in the law itself. It was agreed that rigidity, on the side of the technology as well as on the side of norms, would not serve AVs well. The debate focused on just how much flexibility there should be and how this flexibility can be formulated in the law.

One type of flexibility that already exists is legal standards. One participant emphasized that the law is not the monolith it may seem from the outside – following a single rule, like not crossing a double yellow line, is not the end of an individual’s interaction with the law. There are a host of different laws applying to different situations, and many of these laws are formulated as standards – for example, the standard that a person operating a vehicle drives with “due care and attention.” Such an approach to the law may change a judge’s reasoning when it comes to determining liability for an accident involving an AV.

When we ask whether AVs should always follow the law, our intuitive reaction is that of course they should. Yet some reflection may lead one to conclude that such strict programming might not be realistic. After all, human drivers routinely break the law. Moreover, most of the participants explicitly agreed that as humans, we get to choose to break the law, sometimes in a reasonable way, and we get to benefit from the discretion of law enforcement.

That, however, does not necessarily translate to the world of AVs, where engineers make decisions about code and where enforcement can be automated to a high degree, both ex ante and ex post. Moreover, such flexibility in the law needs to be tailored to the specific social need: speeding is a “freedom” we enjoy with our own personal legacy cars, and this type of law breaking does not fulfill the same social function as allowing a driver to drive onto the sidewalk in order to avoid an accident.

One participant suggested that in order to reduce frustrating interactions with AVs, and to foster greater safety, AVs need the flexibility not to follow the letter of the law in some situations. Consider the specific example of the shuttles running on the University of Michigan’s North Campus: those vehicles are very strict in their compliance with the law (see Susan Carney, Mcity Driverless Shuttle launches on U-M’s North Campus, The Michigan Engineer (June 4, 2018), https://news.engin.umich.edu/2018/06/mcity-driverless-shuttle-launches-on-u-ms-north-campus/). They travel slowly, to the extent that their behavior can annoy human drivers. When similar shuttles from the French company Navya were deployed in Las Vegas (see Paul Comfort, U.S. Cities Building on Las Vegas’ Success With Autonomous Buses, Axios (Sept. 14, 2018), https://www.axios.com/us-cities-building-on-las-vegas-success-with-autonomous-buses-ce6b3d43-c5a3-4b39-a47b-2abde77eec4c.html), there was an accident on the very first run (see Sean O’Kane, Self-driving shuttle crashed in Las Vegas because manual controls were locked away, The Verge (July 11, 2019, 5:32 PM), https://www.theverge.com/2019/7/11/20690793/self-driving-shuttle-crash-las-vegas-manual-controls-locked-away). A car backed into the shuttle, and where a human driver would have gotten out of the way, the shuttle did not.

One answer is that we will know it when we see it, or that solutions will emerge out of usage. However, many industry players do not favor such a risk-taking strategy. Indeed, it was argued that smaller players in the AV industry would not be able to keep up if those with deeper pockets decide to take the riskier path.

Another approach to the question is to ask what kind of goals we should be setting for AVs: strict adherence to legal rules, mitigating harm, or maximizing safety? There are indications of some form of international consensus, namely in the form of a UN resolution (see UN resolution paves way for mass use of driverless cars, UN News (Oct. 10, 2018), https://news.un.org/en/story/2018/10/1022812; UN Economic Commission for Europe, Revised Draft Resolution on the Deployment of Highly and Fully Automated Vehicles in Road Traffic (July 12, 2018), https://www.unece.org/fileadmin/DAM/trans/doc/2018/wp1/ECE-TRANS-WP.1-2018-4-Rev_2e.pdf), that the goal should not be strict adherence to the law, and that other road users may commit errors, which would then put the AV in the position of deciding between strict legality and safety or harm.

In Singapore, the government recently published “Technical Reference 68” (see Joint Media Release, Land Transport Authority, Enterprise Singapore, Standards Development Organization, & Singapore Standards Council (Jan. 31, 2019), https://www.lta.gov.sg/apps/news/page.aspx?c=2&id=8ea02b69-4505-45ff-8dca-7b094a7954f9), which sets up a hierarchy of rules – prioritizing, for example, safety and traffic flow – under the general principle of minimizing rule breaking. This example shows that principles can act as a sense-check. That being said, the technical question of how to “code” the flexibility of a standard into AV software was not entirely answered.
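
One way to picture what such “coding” might look like is as a priority-ordered set of weighted constraints that a planner scores candidate maneuvers against. The sketch below is purely illustrative: the rule names, weights, and maneuver descriptions are hypothetical assumptions, not drawn from Technical Reference 68 or any deployed AV system.

```python
# Illustrative sketch only: a priority-ordered rule hierarchy of the kind a
# standard like Singapore's TR 68 gestures at. Rule names, weights, and the
# candidate maneuvers are hypothetical, not taken from any actual standard.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    weight: float                        # higher weight = higher-priority rule
    violated_by: Callable[[dict], bool]  # does this maneuver violate the rule?

# Hierarchy: safety outranks traffic flow, which outranks minor code violations.
RULES = [
    Rule("avoid_collision",        1000.0, lambda m: m["collision_risk"] > 0.01),
    Rule("maintain_traffic_flow",    10.0, lambda m: m["blocks_traffic"]),
    Rule("stay_right_of_centerline",  1.0, lambda m: m["crosses_double_yellow"]),
]

def rule_cost(maneuver: dict) -> float:
    """Total weighted cost of the rules a candidate maneuver would break."""
    return sum(r.weight for r in RULES if r.violated_by(maneuver))

def choose(maneuvers: list[dict]) -> dict:
    """Pick the maneuver that minimizes rule breaking, per the hierarchy."""
    return min(maneuvers, key=rule_cost)

# Example: crossing the double yellow to pass a cyclist "costs" less than
# either accepting a collision risk or blocking the traffic behind.
candidates = [
    {"name": "hold_lane",  "collision_risk": 0.05, "blocks_traffic": False, "crosses_double_yellow": False},
    {"name": "wait",       "collision_risk": 0.0,  "blocks_traffic": True,  "crosses_double_yellow": False},
    {"name": "cross_line", "collision_risk": 0.0,  "blocks_traffic": False, "crosses_double_yellow": True},
]
print(choose(candidates)["name"])  # -> "cross_line"
```

Even such a toy formulation makes the policy question concrete: the relative weights encode which rules may be broken, and in favor of what.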

Some participants also reminded the audience that human drivers do not have to “declare their intentions” before breaking the law, while AV software developers would have to. Should they be punished for that in advance? Moreover, non-compliance with the law – such as with municipal ordinances on parking – is daily routine for certain business models, such as those that rely on delivery. Yet there is no widespread condemnation of that, and most of us enjoy having consumer goods delivered to our homes.

More generally, as one participant asked, if a person can reasonably decide to break the law as a driver, does that mean the developer or programmer of AV software can decide to break the law in a similar way and face liability later? Perhaps the answer is to turn the question around – change the law to better reflect the driving environment so AVs don’t have to be programmed to break it.

Beyond flexibility, participants discussed how having multiple motor vehicle codes – in effect, one per US state – makes toeing the line of the law difficult. One participant highlighted that having the software of an AV validated by one state is a big enough hurdle, and that more than a handful of such validation processes would be completely unreasonable for an AV developer. Having a single standard was identified as a positive step, though some conceded that states also serve the useful purpose of “incubating” various legal formulations and strategies, allowing the federal government, in due time, to “pick” the best one.

Panel II: Who Gets the Ticket? Who or What is the Legal Driver, and How Should Law Be Enforced Against Them?

The second panel looked at who or what should decide whether an automated vehicle should violate a traffic law, and who or what should be responsible for that violation. Further questions included – Are there meaningful differences among laws about driving behavior, laws about vehicle maintenance, and laws and post-crash responsibilities? How should these laws be enforced? What are the respective roles for local, state, and national authorities?

The participants discussed several initiatives, both public and private, aimed at defining, or helping to define, the notion of a driver in the context of AVs. The Uniform Law Commission worked on the “ADP”, or “automated driving provider”, which would replace the human driver as the entity responsible in case of an accident. The latest report from the RAND Corporation highlighted that the ownership model of AVs will be different, as whole fleets will be owned and maintained by OEMs or other types of businesses, and that these fleet operators would most likely be the drivers (see James M. Anderson et al., Rethinking Insurance and Liability in the Transformative Age of Autonomous Vehicles (2018), https://www.rand.org/content/dam/rand/pubs/conf_proceedings/CF300/CF383/RAND_CF383.pdf).

Insurance was also identified as a matter to take into consideration in shaping the notion of the AV driver. As of the date of the conference, AVs could only be insured outside of state-sponsored guarantee funds, which aim to cover policyholders in case of an insurer’s bankruptcy. With only such “non-admitted” insurance available, most insurers will simply refuse to insure AVs. Who gets to be the driver in the end may have repercussions on whether AVs become insurable or not.

In addition, certain participants stressed the importance of having legally recognizable persons bear the responsibility – the idea that “software” could be held liable was largely rejected by the audience. There should also be only one such person, not several, if liability is to remain manageable from the perspective of the states’ motor vehicle codes. From a more purposive perspective, one would also want the person liable for the “conduct” of the car to be able to effectuate the changes required to minimize that liability, through technical improvements for example. That being said, such persons will only agree to shoulder liability if the costs can be reasonably estimated. Participants recognized that humans tend to trust other humans more than machines or software, are more likely to “forgive” humans for their mistakes, and may trust persons who, objectively speaking, should not be trusted.

Another way forward identified by participants is product liability law, whereby AVs would be understood as a consumer good like any other. The question then becomes one of apportionment of liability, which may be rather complex, as the experience of the Navya shuttle crash in Las Vegas has shown.

Conclusion:

The key takeaway from the two panels is that AV technology now stands at a crossroads, with key decisions being made, even as we speak, by large industry players, national governments, and industry bodies. Because these decisions will have an impact down the road, all participants and panelists agreed that a “go fast and break things” approach will not lead to optimal outcomes. In particular, one theme that emerged from both panels is that it is humans who stand behind the technology, humans who make the key decisions, and humans who will accept or reject commercially deployed AVs, as passengers and road users. As humans, we live our daily lives – which for most of us include using roads in various capacities – in a densely codified environment. However, this code, unlike computer code, is in part unwritten, flexible, and subject to contextualization. Moreover, we sometimes forgive each other’s mistakes. We often think of the technical challenges of AVs in terms of sensors, cameras, and machine learning. Yet the greatest technical challenge of all may be to express all the flexibility of our social and legal rules in an unforgivingly rigid programming language.

