September 2019

This is the much-delayed second part in a series of posts I started earlier this year. In that first post I discussed how companies are experimenting with small delivery robots that crawl along sidewalks to deliver goods right to your door. However, the sidewalk is not the only place where delivery drones may soon be found, as many companies are interested in using aerial drones to bring their products right to consumers.

In April, Wing, a division of Google parent company Alphabet, was given approval to start delivering goods via drone in Canberra, Australia. At launch, the drones were delivering food, medicine, and other products from 12 local businesses. This formal launch came after a trial period spanning 18 months and 3,000 deliveries. Also in April, Wing received an FAA certification typically used for small airlines, as it begins to plan U.S.-based tests, again with the intent to partner with local businesses. Not to be left behind, in June Amazon revealed its own delivery drone, which is intended to bring goods directly from Amazon warehouses to nearby customers within 30 minutes. Also in June, Uber announced a plan to partner with McDonald's to test delivery drones in San Diego. In Ohio, a partnership between the Air Force and the state government will allow drones to be tested beyond line-of-sight (most civilian drones are currently limited to line-of-sight operation by the FAA). One company that intends to take part in the Ohio testing is VyrtX, which is looking to use drones to deliver human organs for transplant.

But just what would wider use of such delivery drones mean for society? What would it mean to live in a world with robots buzzing around above our heads? In the Australian tests there were complaints about noise, with some residents claiming the sound of the machines caused them significant distress. In January of this year an unidentified drone shut down London's Heathrow Airport, showing what can happen when drones wander into places where they're not welcome. In February NASA announced two tests of "urban drone traffic management," one in Texas and the other in Nevada. Such a system would no doubt be necessary before widespread deployment of any of the systems so far proposed – to prevent incidents like the one in London.

There is also a major privacy concern with drones collecting data as they fly above homes and businesses. This concern extends beyond what privately owned drones may find to what law enforcement could collect. In Florida v. Riley, a 1989 case, the Supreme Court found that there is no reasonable expectation of privacy from aircraft (in that case, a police helicopter) flying in navigable airspace above a person's home, so long as the aircraft is flying within FAA regulations. Drones would thus provide a useful tool for investigations, and one that is limited only by FAA rules.

There are a lot of unanswered questions about delivery drones – and given the highly regulated nature of all forms of air travel, the federal government, via the FAA, currently has a lot of power over just what can go on in U.S. airspace. What remains to be seen is whether this regulatory structure will stifle drone development or instead ensure that any market for delivery drones is developed deliberately, rather than ad hoc, with an emphasis on safety.

P.S. – A brief follow-up to my last article – Ford recently partnered with Agility Robotics on a new form of last mile delivery bot, a bipedal unit designed to carry up to 40 pounds. Could it become the C-3PO to the R2-D2-like bots already in testing?

Anyone currently living in a large city or an American college town has had some experience with scooters – whether that be the mere annoyance of having them zip around on sidewalks or, as a friend of mine did, attempting to use one without first checking where the throttle is…

Montréal, the economic and cultural capital of Canada's Québec province, has recently given temporary "test" licenses to the micromobility scooter and bike operators Bird, Lime, and Jump – Lime being backed by Google parent Alphabet and Jump owned by Uber.

Operations started in late spring, amid some skepticism from Montrealers – not only over the strict regulations imposed by the city's bylaw, but also over the steep price of the services. As an article in the leading French-language daily La Presse calculated, a ride that takes slightly more than 20 minutes on foot would cost more than 4 Canadian dollars (about $3) with either Lime (scooters) or Jump (bikes), for a total ride time of 12 minutes. The subway and the existing dock-based bike-share service (BIXI) are cheaper, if not both cheaper and quicker.

While Montréal's young and active population can be seen as the perfect customer base for micromobility, its local government, like many others around the world facing a similar scooter invasion, is serious about tough regulation. Closer to home, Ann Arbor banned Bird, Lyft, and Lime earlier this spring for failure to cooperate; Nashville's mayor attempted a blanket ban; Boulder is considering lifting its ban; several Californian cities are enforcing strict geofencing policies; and further from the US, Amsterdam is putting cameras in place to better enforce its bikes-first regulation, having already handed out 3,500 (!) individual fines over the course of a few months. As NPR reports, the trend is toward further tightening of scooter regulations across the board.

So is Montréal's story any different? Not really. It faces the same chaotic parking situation as everywhere else, with misplaced scooters found outside of their geofence or simply where they should not be. In the bylaw providing for the current test licenses, the city council came up with a new acronym: the unpronounceable VNILSSA, rendered in English as "dockless self-serve unimmatriculated vehicles" (DSUV). The bylaw sets a high standard for operators: they are responsible for the proper parking of their scooters at all times. Not only can scooters be parked only in designated (and physically marked) parking areas, but the operator has two hours to deal with a misplaced scooter after receiving a complaint from the municipal government, and up to ten hours when such a complaint is made by a customer outside of business hours. In addition, customers must be 18 to ride and must wear a helmet.

Tough regulations are nice, but are they even enforced? The wear-a-helmet part of the bylaw falls to the police to enforce, and there has not been much happening on that front so far. As for the other parts, the city had been playing it cool, giving operators a chance to adjust. But that did not suffice: the mayor's team recently announced the start of fining season, targeting both customers who misplace their scooter or bike, if caught red-handed, and the operators in other situations. The mayor's thinly veiled expression of dissatisfaction earlier prompted Lime to send an email to all its customers, asking them in turn to email the mayor's office with a pre-formatted letter praising the micromobility service. The test run was meant to last until mid-November, but it looks like it may end early… The mobility director on the mayor's team pledged that most of the data regarding complaints and their handling – data which operators must keep – would be published on the city's open data portal at the end of the test run.

While Chris Schafer, an executive at Lime Canada, believes that customers still need to be "educated" about innovative micromobility, Montréal's story may prove once more that micromobility operators also need educating – about respecting the rules and about consumers' taste for responsible corporate behavior.

Back in January, I wrote about the auto industry’s growing sense that a set of nationwide regulatory standards was needed to govern automated vehicles (AVs). To date, twenty-nine states and Washington, DC have enacted AV-related legislation. A handful more have adopted Executive Orders or developed some other form of AV regulation. As the number of states with varying regulatory regimes continues to rise, the industry and some experts have grown concerned that the need to comply with a patchwork of disparate laws could hinder development of the industry.

Despite these concerns, and bipartisan support, the federal AV START Act died in the Senate at the close of 2018. After the bill passed the House, a group of Senate Democrats became concerned that it focused too much on encouraging AV adoption at the expense of meaningful safety regulation. After the bill went down at the end of the year, the industry significantly reduced its lobbying efforts, leading some observers to conclude that the effort to pass AV START would not be renewed any time soon.

Never ones to let a good acronym go to waste, several members of Congress have begun work to revive the American Vision for Safer Transportation Through Advancement of Revolutionary Technologies (AV START) Act. Over the summer, a bipartisan group of lawmakers in both chambers held a series of meetings to discuss a new deal. Their hope is that, with Democrats now in control of the House, the safety concerns that stalled the bill in the upper chamber last winter will be assuaged earlier in the process.

Congress' efforts, spearheaded by Senator Gary Peters (D-MI), appear to be making at least some headway. Both the House Committee on Energy and Commerce and the Senate Committee on Commerce, Science, and Transportation have sent letters to a variety of stakeholders requesting comments on a potential bill.

Congress appears to be moving forward deliberately, however, and to date no hearings on the subject have been scheduled in either the House or the Senate. As Congress once again builds an effort to pass comprehensive AV legislation, this blog will follow along and provide updates.

I previously blogged on automated emergency braking (AEB) standardization taking place at the World Forum for Harmonization of Vehicle Regulations (also known as WP.29), a UN working group tasked with managing several international conventions on the topic, including the 1958 Agreement on wheeled vehicle standards.

It turns out the World Forum recently published the result of a joint effort by the EU, US, China, and Japan regarding AV safety. Titled the Revised Framework document on automated/autonomous vehicles, its purpose is to "provide guidance" regarding "key principles" of AV safety, in addition to setting the agenda for the Forum's various subcommittees.

One may first wonder what China and the US are doing there, as they are not parties to the 1958 Agreement. It turns out that participation in the World Forum is open to every UN member, regardless of membership in the Agreement. China and the US are thus given the opportunity to influence the adoption of one standard over another through participation in the Forum and its sub-working groups, without being bound if the outcome is not to their liking in the end. Peachy!

International lawyers know that every word counts, and it is safe to assume every word here was negotiated down to the comma. Using that kind of close textual analysis, what stands out in this otherwise terse UN prose? First, the only sentence couched in mandatory terms. Setting out the drafters' "safety vision," it goes as follows: AVs "shall not cause any non-tolerable risk, meaning . . . shall not cause any traffic accidents resulting in injury or death that are reasonably foreseeable and preventable."

This sets the bar very high as a behavioral standard for AVs, markedly higher than for human drivers. We cause plenty of accidents that are "reasonably foreseeable and preventable." A large share of accidents are probably the result of human error, distraction, or recklessness, all things "foreseeable" and "preventable." Nevertheless, we are allowed to drive and are insurable (except in the most egregious cases…). Whether this is a good standard for AVs can be debated, but what is certain is that it reflects the general idea that we as humans hold machines to a much higher "standard of behavior" than other humans; we forgive other humans for their mistakes, but machines ought to be perfect – or almost so.

In second position: AVs "should ensure compliance with road traffic regulations." This is striking in its simplicity, and I suppose that the whole discussion of how the law and its enforcement are actually rather flexible (the kind of discussion this very journal hosted last year in Ann Arbor) has not reached Geneva yet. As can be seen in the report on that conference, one cannot simply ask AVs to "comply" with the law; there is much more to it.

In third position: AVs "should allow interaction with the other road users (e.g. by means of external human machine interface on operational status of the vehicle, etc.)" Hold on! It turns out this was a topic at last year's Problem-Solving Initiative hosted by the University of Michigan Law School, and we concluded that it was actually a bad idea. Why? First, people need to understand whatever "message" is sent by such an interface, and language may get in the way. Then, the word "interaction" suggests some form of control by the other road user. Think of a hand signal to get the right of way from an AV; living in a college town, it is not difficult to imagine how such "responsive" AVs could wreak havoc in areas with plenty of "other road users," on their feet or zipping around on scooters… Our conclusion was that the AV could send simple light signals to indicate its systems have "noticed" a crossing pedestrian, for example, without any additional control mechanism being given to the pedestrian. Obviously, jaywalking in front of an AV would still result in the AV braking… and maybe sending angry light signals or honking, just like a human driver would.

Finally: cybersecurity and system updates. Oof! The cybersecurity issues of IoT devices are an evergreen source of memes and mockery – windows into a quirky dystopian future where software updates (or the lack thereof) would prevent one from turning the lights on, flushing the toilet, or getting out of the house… or where a botnet of connected wine bottles sends DDoS attacks across the web's vast expanse. What about a software update while merging onto a crowded highway from an entry ramp? In that regard, the language of those sections seems rather meek, simply citing the need to respect "established" cybersecurity "best practices" and to ensure system updates happen "in a safe and secured way…" I don't know what cybersecurity best practices are, but looking at the constant stream of IT industry leaders caught in various cybersecurity scandals, I have some doubts. If there is one area where actual standards are badly needed, it is consumer-facing connected objects.

All in all, is this just yet another useless piece of paper produced by an equally useless international organization? If one is looking for raw power, probably. But there is more to it: the interest of such a document is that it reflects the lowest common denominator among countries with diverging interests. The fact that they agree on something (or maybe on nothing) can be a vital piece of information. If I were an OEM or a policy maker, it is certainly something I would be monitoring with due care.

Cite as: Raphael Beauregard-Lacroix, (Re)Writing the Rules of The Road: Reflections from the Journal of Law and Mobility’s 2019 Conference, 2019 J. L. & Mob. 97.

On March 15th, 2019, the Journal of Law and Mobility, part of the University of Michigan's Law and Mobility Program, presented its inaugural conference, entitled "(Re)Writing the Rules of The Road." The conference was focused on issues surrounding the relationship between automated vehicles ("AVs") and the law. In the afternoon, two panels of experts from academia, government, industry, and civil society were brought together to discuss how traffic laws should apply to automated driving and the legal person (if any) who should be responsible for traffic law violations. The afternoon's events occurred under a modified version of the Chatham House Rule, to allow the participants to speak more freely. In the interest of allowing those who did not attend to still benefit from the day's discussion, the following document was prepared. This document is a summary of the two panels; an effort has been made to de-identify the speakers while retaining the information conveyed.

Panel I: Crossing the Double Yellow Line: Should Automated Vehicles Always Follow the Rules of the Road as Written?

The first panel focused on whether automated vehicles should be designed to strictly follow the rules of the road. Questions included – How should these vehicles reconcile conflicts between those rules? Are there meaningful differences between acts such as exceeding the posted speed limit to keep up with the flow of traffic, crossing a double yellow line to give more room to a bicyclist, or driving through a stop sign at the direction of a police officer? If flexibility and discretion are appropriate, how can this be reflected in law? 

Within the panel, there was overall agreement among the participants that we need both flexibility in making the law and flexibility in the law itself. It was agreed that rigidity, on the side of the technology as well as on the side of norms, would not serve AVs well. The debate focused on just how much flexibility there should be and how that flexibility can be formulated in the law.

One type of flexibility that already exists is legal standards. One participant emphasized that the law is not the monolith it may seem from the outside – following a single rule, like not crossing a double yellow line, is not the end of an individual's interaction with the law. There are a host of different laws applying to different situations, and many of these laws are formulated as standards – for example, the standard that a person operating a vehicle drive with "due care and attention." Such an approach to the law may change the reasoning of a judge when it comes to determining liability for an accident involving an AV.

When we ask whether AVs should always follow the law, our intuitive reaction is that of course they should. Yet some reflection may lead one to conclude that such strict programming might not be realistic. After all, human drivers routinely break the law. Moreover, most of the participants explicitly agreed that as humans, we get to choose to break the law, sometimes in a reasonable way, and we get to benefit from the discretion of law enforcement.

That, however, does not necessarily translate to the world of AVs, where engineers make decisions about code and where enforcement can be automated to a high degree, both ex ante and ex post. Moreover, such flexibility in the law needs to be tailored to the specific social need; speeding is a "freedom" we enjoy with our own, personal legacy cars, and that type of law breaking does not fulfill the same social function as a driver being allowed to swerve onto the sidewalk in order to avoid an accident.

One participant suggested that in order to reduce frustrating interactions with AVs, and to foster greater safety overall, AVs need the flexibility not to follow the letter of the law in some situations. Look to the specific example of the shuttles running on the University of Michigan's North Campus – those vehicles are very strict in their compliance with the law.[1] They travel slowly, to the extent that their behavior can annoy human drivers. When similar shuttles from the French company Navya were deployed in Las Vegas,[2] there was an accident on the very first run.[3] A car backed into the shuttle, and where a human driver would have gotten out of the way, the shuttle did not.

[1] Susan Carney, Mcity Driverless Shuttle launches on U-M's North Campus, The Michigan Engineer (June 4, 2018), https://news.engin.umich.edu/2018/06/mcity-driverless-shuttle-launches-on-u-ms-north-campus/.
[2] Paul Comfort, U.S. cities building on Las Vegas' success with autonomous buses, Axios (Sept. 14, 2018), https://www.axios.com/us-cities-building-on-las-vegas-success-with-autonomous-buses-ce6b3d43-c5a3-4b39-a47b-2abde77eec4c.html.
[3] Sean O'Kane, Self-driving shuttle crashed in Las Vegas because manual controls were locked away, The Verge (July 11, 2019, 5:32 PM), https://www.theverge.com/2019/7/11/20690793/self-driving-shuttle-crash-las-vegas-manual-controls-locked-away.

One answer is that we will know it when we see it, or that solutions will emerge out of usage. However, many industry players do not favor such a risk-taking strategy. Indeed, it was argued that smaller players in the AV industry would not be able to keep up if those with deeper pockets decide to go the risky way.

Another approach to the question is to ask what kinds of goals we should be setting for AVs. Strict abidance by legal rules, or mitigating harm? Maximizing safety? There are indications of some form of international consensus,[4] namely in the form of a UN resolution,[5] that the goal should not be strict abidance by the law, and that other road users may commit errors, which would then put the AV in the position of deciding between strict legality and safety or harm.

[4] UN resolution paves way for mass use of driverless cars, UN News (Oct. 10, 2018), https://news.un.org/en/story/2018/10/1022812.
[5] UN Economic Commission for Europe, Revised draft resolution on the deployment of highly and fully automated vehicles in road traffic (July 12, 2018), https://www.unece.org/fileadmin/DAM/trans/doc/2018/wp1/ECE-TRANS-WP.1-2018-4-Rev_2e.pdf.

In Singapore, the government recently published "Technical Reference 68," which sets up a hierarchy of rules covering matters such as safety and traffic flow, under the general principle of minimizing rule breaking. This example shows that principles can act as a sense-check. That being said, the technical question of how to "code" the flexibility of a standard into AV software was not entirely answered.
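To make the idea of a rule hierarchy concrete, here is a minimal, purely hypothetical sketch – not drawn from the actual text of Technical Reference 68, and with rule names and weights invented for illustration. Each rule gets a priority, and a planner picks the candidate maneuver whose violations carry the least total weight, so that breaking a low-priority traffic-code rule (crossing a double yellow line) is preferred to breaking the safety rule:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    priority: int  # 1 = most important

@dataclass
class Maneuver:
    name: str
    violated: tuple  # rules this maneuver would break

# Hypothetical hierarchy, loosely inspired by the idea of
# safety > traffic flow > letter of the traffic code.
SAFETY = Rule("avoid collision", 1)
FLOW = Rule("keep traffic flowing", 2)
CODE = Rule("do not cross a double yellow line", 3)

def violation_cost(m: Maneuver, base: int = 100) -> int:
    # Exponential weights approximate a lexicographic ordering:
    # a single priority-1 violation outweighs any realistic number
    # of lower-priority violations.
    return sum(base ** (4 - r.priority) for r in m.violated)

# A stalled cyclist blocks the lane: staying in the lane risks a
# collision, while crossing the line merely breaks the traffic code.
candidates = [
    Maneuver("stay in lane", (SAFETY,)),
    Maneuver("cross double yellow line", (CODE,)),
]
best = min(candidates, key=violation_cost)
print(best.name)  # prints "cross double yellow line"
```

The sketch only restates the principle in code; the hard part left open by the panel – deciding which rules a real maneuver actually violates, under uncertainty – is precisely what the numbers above assume away.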

Some participants also reminded the audience that human drivers do not have to "declare their intentions" before breaking the law, while AV software developers would have to. Should they be punished in advance for that? Moreover, non-compliance with the law – such as with municipal ordinances on parking – is daily routine for certain business models, such as those that rely on delivery. Yet there is no widespread condemnation of that, and most of us enjoy having consumer goods delivered to our homes.

More generally, as one participant asked, if a person can reasonably decide to break the law as a driver, does that mean the developer or programmer of AV software can decide to break the law in a similar way and face liability later? Perhaps the answer is to turn the question around – change the law to better reflect the driving environment so AVs don’t have to be programmed to break it. 

Beyond flexibility, participants discussed how having multiple motor vehicle codes – in effect, one per US state – makes toeing the line of the law difficult. One participant highlighted that having the software of an AV validated by one state is a big enough hurdle, and that more than a handful of such validation processes would be completely unreasonable for an AV developer. Having a single standard was identified as a positive step, while some conceded that states also serve the useful purpose of "incubating" various legal formulations and strategies, allowing the federal government in due time to "pick" the best one.

Panel II: Who Gets the Ticket? Who or What is the Legal Driver, and How Should Law Be Enforced Against Them?

The second panel looked at who or what should decide whether an automated vehicle should violate a traffic law, and who or what should be responsible for that violation. Further questions included – Are there meaningful differences among laws about driving behavior, laws about vehicle maintenance, and laws and post-crash responsibilities? How should these laws be enforced? What are the respective roles for local, state, and national authorities?

The participants discussed several initiatives, both public and private, aimed at defining, or helping to define, the notion of driver in the context of AVs. The Uniform Law Commission worked on the "ADP," or "automated driving provider," which would replace the human driver as the entity responsible in case of an accident. The latest report from the RAND Corporation highlighted that the ownership model of AVs will be different, as whole fleets will be owned and maintained by OEMs ("original equipment manufacturers") or other types of businesses, and that most likely these fleet operators would be the drivers.

Insurance was also identified as a consideration in shaping the notion of the AV driver. As of the date of the conference, AVs could be insured only outside of state-sponsored guarantee funds, which aim to cover policy holders in case of the insurer's bankruptcy. Such "non-admitted" insurance means that most insurers will simply refuse to insure AVs. Who gets to be the driver in the end may have repercussions on whether AVs become insurable at all.

In addition, certain participants stressed the importance of having legally recognizable persons bear the responsibility – the idea that "software" may be held liable was largely rejected by the audience. There should also be only one such person, not several, if one wants to make it manageable from the perspective of the states' motor vehicle codes. From a more purposive perspective, one would also want the person liable for the "conduct" of the car to be able to effectuate the changes required to minimize that liability, through technical improvements for example. That being said, such persons will agree to shoulder liability only if the costs can be reasonably estimated. Participants recognized that humans tend to trust other humans more than machines or software, are more likely to "forgive" humans for their mistakes, and sometimes trust persons who, objectively speaking, should not be trusted.

Another way forward identified by participants is product liability law, whereby AVs would be understood as a consumer good like any other. The question then becomes one of apportionment of liability, which may be rather complex, as the experience of the Navya shuttle crash in Las Vegas has shown. 

Conclusion

The key takeaway from the two panels is that AV technology now stands at a crossroads, with key decisions being made as we speak by large industry players, national governments, and industry bodies. Because these decisions will have an impact down the road, all participants and panelists agreed that the "go fast and break things" approach will not lead to optimal outcomes. Specifically, one through-line from the two panels is the idea that it is humans who stand behind the technology, humans who make the key decisions, and humans who will accept or reject commercially deployed AVs, as passengers and road users. As humans, we live our daily lives, which for most of us include using roads in various capacities, in a densely codified environment. However, this code, unlike computer code, is in part unwritten, flexible, and subject to contextualization. Moreover, we sometimes forgive each other's mistakes. We often think of the technical challenges of AVs in terms of sensors, cameras, and machine learning. Yet the greatest technical challenge of all may be to express all the flexibility of our social and legal rules in an unforgivingly rigid programming language.