Up to now, the way forward for roadway-based, commercial automated mobility has remained somewhat of a mystery. Surely, we would not see AVs in the hands of individual owners anytime soon – too expensive. “Robotaxi” fleets commanded by the likes of Uber and Lyft seemed the most plausible option. There was, at least in appearance, a business case, and most industry players seemed to be putting their efforts towards an automated version of the common passenger car.

Over the course of 2019, the landscape slowly but steadily changed: public authorities started to worry more about safety, and the prospects of seeing fleets of “robotaxis” beyond the roads of Arizona, Nevada or California came to seem remote. This is how automated shuttles found their way to the front of the race towards a viable business model and large-scale commercial deployment.

Many now mock these slow-moving “bread loaves,” ridiculing their low speed and unenviable looks. However, some of these comments appear slightly disingenuous. The point of the shuttles is not “to persuade people to abandon traditional cars with steering wheels and the freedom to ride solo.” I don’t see any of these shuttles driving me back home to Montreal from Ann Arbor (a 600-mile/1,000-km straight line). But I do see them strolling around campuses or across airport terminals – the kind of places where I don’t much care about the good looks of whatever is carrying me around, and also the kind of places where I wouldn’t take my car anyway. There might be much to say about how certain electric vehicles marketed directly to the end user failed because of their unappealing design, but I don’t plan to buy a shuttle anytime soon.

Looks aside, these automated “turtles” have a major upside that the “hare” of, say, Tesla (looking at you, Model 3!) may lack – something which happens to be at the top of the agenda these days: safety. While notoriously hard to define in the automated mobility context (what does safety actually imply? When would an AV be safe?), removing speed from the equation immediately takes us into safer territory; public authorities become less concerned, and more collaborative, agreeing to fund early deployment projects. By contrast, scooters irked a lot of municipal governments because they go too fast (among other things). As a result, there was little public appetite for scooters, and operators were forced to withdraw, losing their licenses or failing to become commercially viable.

As a result, it is this safety vein that various industry players have decided to tap. Our turtles are indeed slow, with a top speed of 25 mph, usually staying in the range of 15 to 20 mph. This is no surprise: beyond that speed, braking means traveling several dozen if not hundreds of feet. Within that lower bracket, however, a vehicle can stop in a distance of about two car lengths (not counting reaction time) and avoid transforming a collision into a fatality. Hence, it goes without saying that such shuttles are only suitable for local transportation. But why phrase that as an “only”? Local transportation is equally important. Such shuttles are also suitable for pedestrian environments. Outside of the US, pedestrians have their place on the road – and many, many roads across the globe are mostly pedestrian. Finally, they can also be usefully deployed in certain closed environments, notably airports. In many places, however, deployment of such shuttles on roadways might require some additional work – creation of lanes or changes to existing lanes – in order to accommodate their presence. The same observation can be made for “robotaxis,” however, and the adaptations required there may be much more substantial. The limited applications of automated shuttles may be what, ultimately, makes them less appealing than our Tesla Model 3 and its promises of freedom.
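To make the speed argument concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a constant deceleration of about 7 m/s² (roughly dry pavement) and ignores reaction time; the figures are rough physics, not data from any shuttle operator.

```python
# Back-of-the-envelope braking distances, ignoring reaction time.
# Assumes a constant deceleration of ~7 m/s^2 (roughly dry pavement);
# wet or icy surfaces would lengthen these figures considerably.

MPH_TO_MS = 0.44704   # miles per hour -> metres per second
M_TO_FT = 3.28084     # metres -> feet
DECEL = 7.0           # assumed constant deceleration, m/s^2

def braking_distance_ft(speed_mph: float) -> float:
    """Distance covered while braking from speed_mph to a full stop, in feet."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * DECEL) * M_TO_FT

for mph in (15, 25, 45, 70):
    print(f"{mph:>3} mph -> ~{braking_distance_ft(mph):4.0f} ft")
# ~11 ft at 15 mph, ~29 ft at 25 mph (about two car lengths),
# ~95 ft at 45 mph, ~230 ft at 70 mph.
```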

Overall, turtle shuttles appear closer to an incremental development of widely used rail-based automated driving systems than to a paradigm shift. That might be precisely what makes them a good gateway towards more automation in our mobility systems; there is wisdom in believing that we will have a better grasp of the challenges of automated mobility by actually deploying and using such systems, but it is not written anywhere that we need to break things to do so.

Several major OEMs have recently announced that they are scaling back their shared or automated mobility ventures. Ford and Volkswagen are giving up investments in “robotaxis” – the CEO of their software partner, Argo, was quoted saying he “hates the word” anyway – and similar services operated by German automakers are withdrawing from various markets or shutting down altogether, after overextending themselves during the last 18 months.

Two separate trends seem to contribute to that movement. The first: car ownership is still growing worldwide, albeit modestly – roughly 1% per year over the last ten years in Germany, for example – while sales of new cars are slumping. It is important to differentiate the two: while new car sales affect the revenues of OEMs, and may indicate changes in consumption patterns, car ownership rates are a better indicator of people’s attitude towards car ownership. In that sense, we see a continued attachment to personal car ownership, a cultural phenomenon that is much more difficult to displace or even disrupt than some may have previously thought. Hence, the dreaded “peak car” that would relegate the iconic 20th century consumer good to museums may not materialize for a while.

The second trend has to do with an observation made time and again: OEMs are not naturally good at running mobility services; their business is making cars. As one bank analyst put it, no one expects Airbus or Boeing to run an airline. Why should it be any different with car OEMs? Thinking about the prospects of automation, it became commonplace for large industrial players to partner with specialized software developers to develop the automated driving system. That may result in a great product, but it does not create a market or a business plan when it comes to the AVs themselves. As it turned out, the main business plan, which was to use these cars as part of large car-sharing services or sell them to existing mobility operators, ran into some roadblocks: OEMs found themselves competing with already existing mobility operators in a difficult market, and putting an AV safely on the road is a much more daunting task than once thought. As 2019 comes to a close, we have yet to see an actual commercial “robotaxi” deployment outside of test runs.

This second trend puts a large question mark over the short- and medium-term financial viability of investments in “robotaxis” and automated mobility operations generally. OEMs and their partners, looking for ways to put all those vehicle automation efforts to profitable use, are looking at other markets, such as heavy, non-passenger road and industrial vehicles. Nevertheless, no one seems poised to completely exit the automated passenger mobility market; they all keep a foot in the door, continuing their tests and “gathering more data,” allegedly in order to understand the mobility needs of road users. Beyond these noble intentions, however, there is an exit plan: if all else fails, they can monetize their data sets by selling them to data-hungry software developers.

In the end, this comes back to a point frequently addressed on this blog, that of safety. Technological advances in automation (broadly speaking) are bringing increased safety to existing cars, and they will continue to do so. We might have become overly fixated on the golden goose of the “Level 5” robotaxi (or even Level 3), which may or may not come in the next ten years, neglecting the low-hanging fruit. While we laugh at our ancestors for dreaming of flying cars by the year 2000, our future selves will scoff at us for chasing robotaxis by 2020.

On November 19, the NTSB held a public board meeting on the 2018 Uber accident in Tempe, Arizona, involving an “automated” (actually level 3) Uber-operated Volvo SUV. A pedestrian, Elaine Herzberg, died in the accident. In the wake of the report, it is now a good time to come back to level 3 cars and the question of “safety drivers.”

Given that the purpose of the meeting was to put the blame on someone, media outlets were quick to pick a culprit for their headlines: the “safety driver” who kept looking at her phone? The sensors, which detected all kinds of things but never a person? Uber, which deactivated the OEM’s emergency braking? Or maybe Uber’s “safety culture”? A whole industry’s?

The Board actually blames all of them, steering clear of singling out one event or actor. It is probably the safest and most reasonable course of action for a regulator, and it has relevant implications for how law enforcement will handle accidents involving AVs in the future. But because we are humans, we may stick more strongly with the human part of the story, that of the safety driver.

She was allegedly looking at her phone, “watching TV” as one article put it, following the latest episode of The Voice. The Board determined that she looked at the road one second before the impact. That is short but, under more normal circumstances, enough to slam on the brakes. Maybe her foot was far from the pedal; maybe she simply did not react because she was not in an “aware” state of mind (“automation complacency,” the report calls it). In any case, it was her job to watch the road, and she was violating Uber’s policy by using her phone while working as a safety driver.

At the time of the accident, the Tempe police released footage from the dash cam, covering the few seconds up to the impact and showing a poorly lit street. The relevance of this footage was then disputed in an Ars Technica article, which aimed to demonstrate that the street is actually well lit and that the car’s headlights alone should have made the victim visible in time. Yet I think it is too easy to put the blame on the safety driver. She was not doing her job, but what kind of job was it? Humans drive reasonably well, but that is when we are actually driving, not sitting in the driver’s seat with nothing to do but wait for something to jump out from the roadside. Even if she had been paying attention, injury was reasonably foreseeable. And even if she had been driving in broad daylight, there remains a more fundamental problem besides safety driver distraction.

“The [NTSB] also found that Uber’s autonomous vehicles were not properly programmed to react to pedestrians crossing the street outside of designated crosswalks,” one article writes. I find that finding somewhat more appalling than the distracted safety driver. Call that human bias; still, I do not expect machines to be perfect. But what this tells us is that stricter monitoring of safety drivers’ cellphone usage will not cut it either, if the sensors keep failing. The sensors need to be able to handle this kind of situation. A car whose sensors cannot recognize a slowly crossing pedestrian (anywhere, even in the middle of the highway) does not have its place on a 45-mph road, period.

If there is one thing this accident has shown, it is that “safety drivers” add little to the safety of AVs. It is a coin flip: in some cases, the reactivity and skill of the driver make up for a sensor failure; in others, a distracted, “complacent” driver (for whatever reason, phone or other) does not. It is safe to say that the overall effect on safety is at best neutral. Worse still, it may provide a false sense of safety to the operator, as it apparently did here. This, in turn, prompts us to think about level 3 altogether.

While Uber has stated that it has “significantly improved its safety culture” since the accident, the question of the overall safety of these level 3 cars remains. And beyond everything Uber can do, one may wonder whether such accidents are not bound to repeat themselves should level 3 cars see mass commercial deployment. Humans are not reliable “safety drivers.” And in a scenario that involves such drivers, it takes much less than the deadly laundry list of failures we had here for such an accident to happen. Being complacent may also mean that your foot is not close to the pedals, or that your hands are not “hovering above the steering wheel” as they (apparently) should be. The extra half second it takes to slam on the brakes or grip the wheel is enough to transform a serious injury into a death.

The paramount error here was to integrate a human – a person Uber should have known would be distracted or less responsive than an average driver – as the final safeguard against sensor failure. Not long ago, many industry players were concerned about early standardization. Now that some companies are out there, going fast and literally breaking people (not even things, mind you!), the time has come to seriously discuss safety and testing standards, at the US federal and, why not, international level.

A University of Michigan Law School Problem Solving Initiative class on AV standardization will take place during the Winter semester of 2020, with deliverables in April. Stay tuned!

In a recent article published on Reuters Regulatory Intelligence, a DC-area lawyer said the following regarding the potential of applying no-fault insurance to automated vehicles:

“Drivers have an inherent incentive to drive safely, so as not to be injured or killed on the roadways. That inherent incentive is what mitigates the “moral hazard” of a no-fault system. But in a no-fault model for autonomous vehicles, the incentives toward safety would be degraded given that manufacturers do not suffer the physical consequences of unsafe operation, as do drivers.”

Intuitively, this seems right. Yet I wondered: is there more to it? What would a world with an AV no-fault insurance scheme look like?

This might be puzzling at first: what does one mean by no-fault AV insurance? In a more standard setting, a no-fault insurance system means that one gets the benefits of one’s own insurance without regard to actual “fault” (such as negligence), and that civil suits on the basis of fault are banned or severely restricted. No-fault systems are straightforward and predictable, although potentially less “just,” to the extent that a negligent driver may get away with nothing more than a deductible to pay or perhaps a higher premium.

There was, and still is, a good policy rationale behind no-fault systems for car accidents: avoiding the social cost of civil litigation, and shifting the financial cost of such litigation onto the insurers, between whom things are more often than not settled out of court. Those we want to protect with no-fault insurance schemes are drivers (and passengers), and that is a majority of the population.

Now let’s consider AVs. Who do we want to protect? Passengers, for sure. But drivers? There is no driver! Or rather, there are many drivers. To some extent, at least under a layman’s understanding of the term “driver,” all the actors along the supply chain are driving the AV. Or, to be more precise, it is difficult to pinpoint a single driver: the “operator”? The software designer? And that is already assuming that there is a single entity that designed the software or operates what may be a fleet of AVs. There may be others still, as the AV industry continues to evolve; we can already see that industry players are taking various paths, some opting for a form of vertical integration, others relying on a variety of suppliers, in a less streamlined way.

Do all these industry players deserve extra protection? They are all corporate entities after all, and as the lawyer quoted above notes, none of them is subject to physical injury in case of an accident. While expensive litigation can drive corporations into the ground, the case for shifting costs onto insurers, when it comes to AV drivers, appears less clear. What is clear, though, is that human victims of an accident involving an AV ought to be at least as protected as if the accident did not involve an AV, and maybe even more.

The final answer will come from lawmakers. Moreover, one should not forget that no-fault insurance is mandatory in only a minority of US states, despite being prevalent in the rest of the world. Yet I believe there might be a case here for a legal scheme that would guarantee a litigation-free recourse to human accident victims, potentially in the form of an industry-funded guarantee fund, while giving the various players along the supply chain the opportunity to fight it out, in court if need be, on the basis of their actual involvement in the cause of the accident; they are all sophisticated players, after all, and all share in the benefits of the risk they create. The stories of human victims, though, are what may “kill” the industry if not enough care is taken to ensure a high level of legal protection.

In 2015, Google’s parent, Alphabet, decided the time was ripe for establishing a subsidiary in charge of investing in “smart infrastructure” projects – from waste to transport and energy. Its aim was specifically to implement such projects, transforming our urban landscape into a realm of dynamic and connected infrastructure. Fast forward two years, and Sidewalk Labs had become embroiled in a smart city project covering a somewhat derelict (but highly valuable) area of Toronto along the shores of Lake Ontario.

Back in 2001, the Canadian metropolis set up the aptly named Waterfront Toronto (WT), a publicly controlled corporation in charge of revitalizing the city’s entire Lake Ontario waterfront. In early 2017, WT published a “Request for Proposals,” looking for an “investment and funding partner” for what would become known as the Quayside project. By the end of the year, the Alphabet subsidiary had been chosen by WT.

It is important to note that this project was initially conceived as a real estate one, with the desired innovation to be found in building materials and carbon neutrality, while achieving certain goals in terms of social housing. There was no express desire for a model “smart city” of any sort; the document does mention the use of “smart technologies,” but always in the context of reducing building costs and improving the carbon footprint.

Critics were quick to point out the puzzling choice: as innovative as it may be, Alphabet has no experience in real estate development. Rather, its core business is data processing and analytics, sometimes for research and often for advertising purposes. What was meant to be a carbon-positive real estate project seemed to be morphing into a hyper-connected (and expensive) urban hub.

And then came Sidewalk Labs’ detailed proposal. The visuals are neat; tellingly, there is not a single electronic device to be found in those pictures (is that one man on his cellphone?!). The words, however, tell another story. Carbon footprint and building costs take a back seat to (personal) data processing: “Sidewalk expects Quayside to become the most measurable community in the world,” as stated in their winning proposal. One wonders whether the drafters of the proposal sincerely thought that, in this day and age, such a statement would fly with public opinion.

Critics of the project (who have since coalesced in the #BlockSidewalk movement) used the opportunity to dig deeper into WT itself, highlighting governance issues and the top-down character of the original Request for Proposals, beyond the plethora of data privacy questions (if not problems) the Sidewalk Labs proposal raised. In response, Sidewalk Labs deployed a vast public relations campaign, whose success is far from guaranteed: they have “upgraded” their project, aiming for a bigger plot of land and even a new light rail plan (funded mostly with public money). At the time of this writing, WT has yet to make its final decision on whether to retain the Alphabet subsidiary’s project.

What lessons can we draw from this Toronto experience? “Smart city” projects are bound to become more commonplace, and while this one was not meant as such, some will be more straightforward in their aims. First, we should question the necessity of connecting every single thing and person. It matters to keep in mind the social objectives of a given project, such as reducing the carbon footprint or building costs. Collection of personal data can then be articulated around, and in service of, those objectives, rather than as an end in itself. Connecting the park bench may be fancy, but for what purpose? More down to earth, the same question can be asked of street lights.

As Christof Spieler reminds us in a recent tweet thread, certain municipal governments may be approached with “free” turnkey projects of connected infrastructure, in exchange (oh wait, it’s not free?) for both data and the integration of the developer’s pre-existing systems into that infrastructure. Think of advertisements, and all the other possible monetization avenues… As Spieler points out, monetized smart infrastructure may come at a heavy social cost.

Beyond that, one may wonder – who do we want as developers of such projects? Do we need the Sidewalk Labs of this world to realize the post-industrial heaven shown in the visuals of the Proposal? How will multinational data crunchers with an ominous track record make our cities smarter? The burden of proof is on them.

Anyone currently living in a large city or an American college town has had some experience with scooters – be it only the annoyance of having them zip around on sidewalks. Or, as a friend of mine did, attempting to use one without first checking where the throttle is…

Montréal, the economic and cultural capital of Québec province in Canada, has recently given temporary “test” licenses to the micromobility scooter and bike operators Bird, Lime and Jump, the latter two backed by Google and owned by Uber, respectively.

Operations started in late spring, amid some skepticism from Montrealers – not only because of the strict regulations imposed by the city’s bylaw, but also because of the steep price of the services. As an article from the leading French-language daily La Presse calculated, a trip that takes slightly more than 20 minutes on foot would cost more than 4 Canadian dollars (about US$3) with either Lime (scooters) or Jump (bikes), for a total ride time of 12 minutes. The subway and the existing dock-based bike-share service (BIXI) are cheaper, if not both cheaper and quicker.

While Montréal’s young and active population can be seen as the perfect customer base for micromobility, its local government, like many others across the world facing a similar scooter invasion, really means it when it comes to tough regulation. Closer to home, Ann Arbor banned Bird, Lyft and Lime earlier this spring for failure to cooperate; Nashville’s mayor attempted a blanket ban; Boulder is considering lifting its ban; several Californian cities are enforcing a strict geofencing policy; further away from the US, Amsterdam is also going to put cameras in place in order to better enforce its bikes-first regulation, after having already handed out 3,500 (!) individual fines over the course of a few months. As NPR reports, the trend is toward further tightening of scooter regulations across the board.

So is Montréal’s story any different? Not really. It faces the same chaotic parking situation as everywhere else, with misplaced scooters found outside of their geofence or simply where they should not be. In its bylaw providing for the current test licenses, the city council came up with a new acronym: the unpronounceable VNILSSA, or DSUV in English, which stands for “dockless self-serve unimmatriculated vehicles” (that is, unregistered vehicles). The bylaw sets a high standard for operators: they are responsible for the proper parking of their scooters at all times. Not only can scooters be parked only in designated (and physically marked) parking areas, but the operator has two hours to deal with a misplaced scooter after receiving a complaint from the municipal government, and up to ten hours when the complaint is made by a customer outside of business hours. In addition, customers must be 18 to ride and must wear a helmet.

Tough regulations are nice, but are they even enforced? The wear-a-helmet part of the bylaw is the police’s task to enforce, and not much has been happening on that front so far. As for the other parts, the city had so far been playing it cool, giving operators a chance to adjust. But that did not suffice: the mayor’s team recently announced the start of fining season, targeting customers who misplace their scooter or bike, if caught red-handed, and the operators in other situations. The mayor’s earlier, thinly veiled expression of dissatisfaction prompted Lime to send an email to all its customers, asking them in turn to email the mayor’s office with a pre-formatted letter praising the micromobility service. The test run was meant to last until mid-November, but it looks like it may end early… The mobility director of the mayor’s team pledged that most of the data regarding complaints and their handling – data which operators must keep – would be published on the city’s open data portal at the end of the test run.

Chris Schafer, an executive at Lime Canada, believes that customers still need to be “educated” about innovative micromobility; Montréal’s story may prove once more that micromobility operators also need educating, when it comes to respecting both the rules and consumers’ taste for responsible corporate behavior.

I previously blogged on automated emergency braking (AEB) standardization taking place at the World Forum for Harmonization of Vehicle Regulations (also known as WP.29), a UN working group tasked with managing a few international conventions on the topic, including the 1958 Agreement on wheeled vehicle standards.

It turns out the World Forum recently published the result of a joint effort undertaken by the EU, US, China, and Japan regarding AV safety. Titled Revised Framework document on automated/autonomous vehicles, its purpose is to “provide guidance” regarding “key principles” of AV safety, in addition to setting the agenda for the various subcommittees of the Forum.

One may first wonder what China and the US are doing there, as they are not parties to the 1958 Agreement. It turns out that participation in the World Forum is open to everyone (at the UN), regardless of membership in the Agreement. China and the US are thus given the opportunity to influence the adoption of one standard over another through participation in the Forum and its sub-working groups, without being bound if the outcome is not to their liking in the end. Peachy!

International lawyers know that every word counts, and it is safe to assume that every word was negotiated down to the comma. Using that kind of close textual analysis, what stands out in this otherwise terse UN prose? First, the only sentence couched in mandatory terms. Setting out the drafters’ “safety vision,” it goes as follows: AVs “shall not cause any non-tolerable risk, meaning . . . shall not cause any traffic accidents resulting in injury or death that are reasonably foreseeable and preventable.”

This sets the bar very high in terms of AV behavioral standards, markedly higher than for human drivers. We cause plenty of accidents that are “reasonably foreseeable and preventable.” A large share of accidents is probably the result of human error, distraction, or recklessness, all things “foreseeable” and “preventable.” Nevertheless, we are allowed to drive and are insurable (except in the most egregious cases…). Whether this is a good standard for AVs can be debated, but what is certain is that it reflects the general idea that we as humans hold machines to a much higher “standard of behavior” than other humans; we forgive other humans for their mistakes, but machines ought to be perfect – or almost so.

In second position: AVs “should ensure compliance with road traffic regulations.” This is striking in its simplicity, and I suppose that the whole discussion on how the law and its enforcement are actually rather flexible (the kind of discussion this very journal hosted last year in Ann Arbor) has not reached Geneva yet. As can be seen in the report on that conference, one cannot just ask AVs to “comply” with the law; there is much more to it.

In third position: AVs “should allow interaction with the other road users (e.g. by means of external human machine interface on operational status of the vehicle, etc.)” Hold on! It turns out this was a topic at last year’s Problem-Solving Initiative hosted by the University of Michigan Law School, and we concluded that this was actually a bad idea. Why? First, people need to understand whatever “message” is sent by such an interface, and language may get in the way. Then, the word “interaction” suggests some form of control by the other road user. Think of a hand signal to get the right of way from an AV; living in a college town, it is not difficult to imagine how such “responsive” AVs could wreak havoc in areas with plenty of “other road users,” on their feet or zipping around on scooters… Our conclusion was that the AV could send simple light signals to indicate that its systems have “noticed” a crossing pedestrian, for example, without any additional control mechanism being given to the pedestrian. Obviously, jaywalking in front of an AV would still result in the AV braking… and maybe sending angry light signals or honking, just like a human driver would.
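To illustrate the design choice we settled on – an acknowledgment-only interface, with no control channel – here is a minimal Python sketch. The names, states and detection labels are hypothetical illustrations, not drawn from any real AV stack.

```python
# A minimal sketch of an "acknowledge but don't obey" external interface:
# the vehicle may display that it has noticed a pedestrian, but external
# gestures never feed into its driving decisions. All names are hypothetical.

from enum import Enum, auto

class ExternalSignal(Enum):
    NONE = auto()
    PEDESTRIAN_NOTICED = auto()   # a simple light pattern meaning "I see you"

def external_signal(detections: set[str]) -> ExternalSignal:
    """Purely informational output for other road users."""
    return (ExternalSignal.PEDESTRIAN_NOTICED
            if "pedestrian" in detections else ExternalSignal.NONE)

def plan_motion(detections: set[str]) -> str:
    # The planner brakes for a crossing pedestrian regardless of any gesture;
    # there is deliberately no input channel for hand signals from road users.
    return "brake" if "pedestrian" in detections else "proceed"

if __name__ == "__main__":
    scene = {"pedestrian", "parked car"}
    print(external_signal(scene).name, plan_motion(scene))  # PEDESTRIAN_NOTICED brake
```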

Finally: cybersecurity and system updates. Oof! The cybersecurity issues of IoT devices are an evergreen source of memes and mockery, a window into a quirky dystopian future where software updates (or the lack thereof) prevent one from turning the lights on, flushing the toilet, or getting out of the house… or where a botnet of connected wine bottles sends DDoS attacks across the web’s vast expanse. What about a software update while merging onto a crowded highway from an entry ramp? In that regard, the language of those sections seems rather meek, simply citing the need to respect “established” cybersecurity “best practices” and to ensure system updates “in a safe and secured way…” I don’t know what cybersecurity best practices are, but looking at the constant stream of IT industry leaders caught up in cybersecurity scandals, I have some doubts. If there is one area where actual standards are badly needed, it is consumer-facing connected objects.

All in all, is this just yet another useless piece of paper produced by an equally useless international organization? If one is looking for raw power, probably. But there is more to it: the interest of such a document is that it reflects the lowest common denominator among countries with diverging interests. The fact that they agree on something (or perhaps on nothing) can be a vital piece of information. If I were an OEM or a policy maker, it is certainly something I would be monitoring with due care.

Cite as: Raphael Beauregard-Lacroix, (Re)Writing the Rules of The Road: Reflections from the Journal of Law and Mobility’s 2019 Conference, 2019 J. L. & Mob. 97.

On March 15th, 2019, the Journal of Law and Mobility, part of the University of Michigan’s Law and Mobility Program, presented its inaugural conference, entitled “(Re)Writing the Rules of The Road.” The conference was focused on issues surrounding the relationship between automated vehicles (“AVs”) and the law. In the afternoon, two panels of experts from academia, government, industry, and civil society were brought together to discuss how traffic laws should apply to automated driving and the legal person (if any) who should be responsible for traffic law violations. The afternoon’s events occurred under a modified version of the Chatham House Rule, to allow the participants to speak more freely. In the interest of allowing those who did not attend to still benefit from the day’s discussion, the following document was prepared. This document is a summary of the two panels, and an effort has been made to de-identify the speakers while retaining the information conveyed.

Panel I: Crossing the Double Yellow Line: Should Automated Vehicles Always Follow the Rules of the Road as Written?

The first panel focused on whether automated vehicles should be designed to strictly follow the rules of the road. Questions included – How should these vehicles reconcile conflicts between those rules? Are there meaningful differences between acts such as exceeding the posted speed limit to keep up with the flow of traffic, crossing a double yellow line to give more room to a bicyclist, or driving through a stop sign at the direction of a police officer? If flexibility and discretion are appropriate, how can this be reflected in law? 

Within the panel, there was overall agreement among the participants that we need both flexibility in making the law and flexibility in the law itself. It was agreed that rigidity, on the side of the technology as well as on the side of norms, would not serve AVs well. The debate focused on just how much flexibility there should be and how this flexibility can be formulated in the law.

One type of flexibility that already exists is legal standards. One participant emphasized that the law is not the monolith it may seem from the outside – following a single rule, like not crossing a double yellow line, is not the end of an individual’s interaction with the law. There are a host of different laws applying to different situations, and many of these laws are formulated as standards – for example, the standard that a person operating a vehicle drives with “due care and attention.” Such an approach to the law may change the reasoning of a judge when it comes to determining liability for an accident involving an AV.

When we ask whether AVs should always follow the law, our intuitive reaction is that of course they should. Yet some reflection may lead one to conclude that such strict programming might not be realistic. After all, human drivers routinely break the law. Moreover, most of the participants explicitly agreed that as humans, we get to choose to break the law, sometimes in a reasonable way, and we get to benefit from the discretion of law enforcement.

That, however, does not necessarily translate to the world of AVs, where engineers make decisions about code and where enforcement can be automated to a high degree, both ex ante and ex post. Moreover, such flexibility in the law needs to be tailored to the specific social need; speeding is a “freedom” we enjoy with our own, personal legacy cars, and this type of law breaking does not fulfill the same social function as a driver being allowed to drive onto the sidewalk in order to avoid an accident.

One participant suggested that in order to reduce frustrating interactions with AVs, and to foster greater safety overall, AVs need the flexibility not to follow the letter of the law in some situations. Consider the specific example of the shuttles running on the University of Michigan’s North Campus – those vehicles are very strict in their compliance with the law. [1: Susan Carney, Mcity Driverless Shuttle launches on U-M’s North Campus, The Michigan Engineer (June 4, 2018), https://news.engin.umich.edu/2018/06/mcity-driverless-shuttle-launches-on-u-ms-north-campus/.] They travel slowly, to the extent that their behavior can annoy human drivers. When similar shuttles from the French company Navya were deployed in Las Vegas, [2: Paul Comfort, U.S. cities building on Las Vegas’ success with autonomous buses, Axios (Sept. 14, 2018), https://www.axios.com/us-cities-building-on-las-vegas-success-with-autonomous-buses-ce6b3d43-c5a3-4b39-a47b-2abde77eec4c.html.] there was an accident on the very first run. [3: Sean O’Kane, Self-driving shuttle crashed in Las Vegas because manual controls were locked away, The Verge (July 11, 2019, 5:32 PM), https://www.theverge.com/2019/7/11/20690793/self-driving-shuttle-crash-las-vegas-manual-controls-locked-away.] A car backed into the shuttle, and where a normal driver would have gotten out of the way, the shuttle did not.

One answer is that we will know it when we see it, or that solutions will emerge out of usage. However, many industry players do not favor such a risk-taking strategy. Indeed, it was argued that smaller players in the AV industry would not be able to keep up if those with deeper pockets decided to take the risky route.

Another approach is to ask what kind of goals we should be setting for AVs: strict abidance by legal rules, mitigating harm, or maximizing safety? There are indications of some form of international consensus [4: UN resolution paves way for mass use of driverless cars, UN News (Oct. 10, 2018), https://news.un.org/en/story/2018/10/1022812.] (namely in the form of a UN Resolution) [5: UN Economic Commission for Europe, Revised draft resolution on the deployment of highly and fully automated vehicles in road traffic (July 12, 2018), https://www.unece.org/fileadmin/DAM/trans/doc/2018/wp1/ECE-TRANS-WP.1-2018-4-Rev_2e.pdf.] that the goal should not be strict abidance by the law, and that other road users may commit errors, which would then put the AV in the position of deciding between strict legality and safety or harm.

In Singapore, the government recently published “Technical Reference 68,” [6: Joint Media Release, Land Transport Authority, Enterprise Singapore, Standards Development Organization, & Singapore Standards Council, Singapore Develops Provisional National Standards to Guide Development of Fully Autonomous Vehicles (Jan. 31, 2019), https://www.lta.gov.sg/apps/news/page.aspx?c=2&id=8ea02b69-4505-45ff-8dca-7b094a7954f9.] which sets up a hierarchy of rules – such as safety and traffic flow – together with a general principle of minimizing rule breaking. This example shows that principles can act as a sense-check. That being said, the technical question of how to “code” the flexibility of a standard into AV software was not entirely answered.

Some participants also reminded the audience that human drivers do not have to “declare their intentions” before breaking the law, while AV software developers would have to. Should they be punished in advance for that? Moreover, non-compliance with the law – such as municipal ordinances on parking – is daily routine for certain business models, such as those that rely on delivery. Yet there is no widespread condemnation of that, and most of us enjoy having consumer goods delivered at home.

More generally, as one participant asked, if a person can reasonably decide to break the law as a driver, does that mean the developer or programmer of AV software can decide to break the law in a similar way and face liability later? Perhaps the answer is to turn the question around – change the law to better reflect the driving environment so AVs don’t have to be programmed to break it. 

Beyond flexibility, participants discussed how having multiple motor vehicle codes – in effect, one per US state – makes toeing the line of the law difficult. One participant highlighted that having the software of an AV validated by one state is a big enough hurdle, and that more than a handful of such validation processes would be completely unreasonable for an AV developer. Having a single standard was identified as a positive step, while some conceded that states also serve the useful purpose of “incubating” various legal formulations and strategies, allowing the federal government, in due time, to “pick” the best one.

Panel II: Who Gets the Ticket? Who or What is the Legal Driver, and How Should Law Be Enforced Against Them?

The second panel looked at who or what should decide whether an automated vehicle should violate a traffic law, and who or what should be responsible for that violation. Further questions included – Are there meaningful differences among laws about driving behavior, laws about vehicle maintenance, and laws and post-crash responsibilities? How should these laws be enforced? What are the respective roles for local, state, and national authorities?

The participants discussed several initiatives, both public and private, aimed at defining, or helping to define, the notion of driver in the context of AVs. The Uniform Law Commission worked on the “ADP,” or “automated driving provider,” which would replace the human driver as the entity responsible in case of an accident. The latest report from the RAND Corporation highlighted that the ownership model of AVs will be different, as whole fleets will be owned and maintained by OEMs (“original equipment manufacturers”) or other types of businesses, and that these fleet operators would most likely be the drivers. [7: James M. Anderson et al., Rethinking Insurance and Liability in the Transformative Age of Autonomous Vehicles (2018), https://www.rand.org/content/dam/rand/pubs/conf_proceedings/CF300/CF383/RAND_CF383.pdf.]

Insurance was also identified as a matter to take into consideration in shaping the notion of AV driver. As of the date of the conference, AVs can only be insured outside of state-sponsored guarantee funds, which aim to cover policyholders in case of the insurer’s bankruptcy. Such “non-admitted” insurance means that most insurers will simply refuse to insure AVs. Who gets to be the driver in the end may have repercussions on whether AVs become insurable or not.

In addition, certain participants stressed the importance of having legally recognizable persons bear the responsibility – the idea that “software” may be held liable was largely rejected by the audience. There should also be only one such person, not several, if one wants to make it manageable from the perspective of the states’ motor vehicle codes. From a more purposive perspective, one would want the person liable for the “conduct” of the car to be able to effectuate the required changes so as to minimize that liability, through technical improvements for example. That being said, such persons will only agree to shoulder liability if costs can be reasonably estimated. It was recognized by participants that humans tend to trust other humans more than machines or software, and are more likely to “forgive” humans for their mistakes, or to trust persons who, objectively speaking, should not be trusted.

Another way forward identified by participants is product liability law, whereby AVs would be understood as a consumer good like any other. The question then becomes one of apportionment of liability, which may be rather complex, as the experience of the Navya shuttle crash in Las Vegas has shown. 

Conclusion

The key takeaway from the two panels is that AV technology now stands at a crossroads, with key decisions being taken, even as we speak, by large industry players, national governments and industry bodies. As these decisions will have an impact down the road, all participants and panelists agreed that the “go fast and break things” approach will not lead to optimal outcomes. Specifically, one through-line that emerges from the two panels is the idea that it is humans who stand behind the technology, humans who make the key decisions, and also humans who will accept or reject commercially deployed AVs, as passengers and road users. As humans, we live our daily lives, which for most of us include using roads in various capacities, in a densely codified environment. However, this code, unlike computer code, is in part unwritten, flexible and subject to contextualization. Moreover, we sometimes forgive each other’s mistakes. We often think of the technical challenges of AVs in terms of sensors, cameras and machine learning. Yet the greatest technical challenge of all may be to express all the flexibility of our social and legal rules in unforgivingly rigid programming languages.

“Safety.” A single word that goes hand-in-hand (and rhymes!) with CAV. While much has been said and written about CAV safety already (including on this very blog, here and there), two things are certain: human drivers seem relatively safe – when considering the number of fatalities per mile driven – yet there are still too many accidents, and increasingly more of them.

The traditional approach to safely deploying CAVs has been to make them drive – so many miles, with so few accidents and “disengagements” – that the regulator (and the public) would consider them safe enough. Or even safer than us!

Is that the right way? One can question where CAVs are being driven: if all miles are equal, some are more equal than others. All drivers know that a mile on a straight, well-maintained road on a fine sunny day is not the same as a mile driven on the proverbially mediocre Michigan roads during a bout of freezing rain. The economics are clear: the investments in AV technology will only turn a profit through mass deployment. Running a few demos and prototypes in Las Vegas won’t cut it; CAVs need to be ready to tackle the diversity of weather patterns found throughout the world, beyond the confines of the US Southwest.

Beyond location, there is the additional question of whether such a testing method is the right one in the first place. Many are challenging what appears to be the dominant approach, most recently during this summer’s Automated Vehicle Symposium. Their suggestion: proper comparison and concrete test scenarios. For example, rather than simply aiming for the fewest accidents per thousands of miles driven, one can measure braking performance at 35 mph, in low-visibility and wet conditions, when a pedestrian appears 10 yards in front of the vehicle. In such a scenario, human drivers can meaningfully be compared to software ones. Furthermore, on that basis, all industry players could come together to develop a safety checklist which any CAV must be able to pass before hitting the road.
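To see why such a concrete scenario is more informative than an aggregate miles-per-accident figure, here is a toy pass/fail check in Python. The reaction times and deceleration values are illustrative assumptions on my part, not actual test parameters from the Symposium.

```python
# Toy pass/fail check for one concrete scenario: braking at a given speed when a
# pedestrian appears 10 yards ahead on a wet road. All figures are illustrative.

MPH_TO_MS = 0.44704   # miles per hour -> metres per second
YD_TO_M = 0.9144      # yards -> metres

def stops_in_time(speed_mph: float, reaction_s: float,
                  decel_ms2: float, pedestrian_yd: float = 10.0) -> bool:
    """True if reaction distance plus braking distance is shorter than the
    distance to the pedestrian."""
    v = speed_mph * MPH_TO_MS
    stopping_m = v * reaction_s + v ** 2 / (2 * decel_ms2)
    return stopping_m < pedestrian_yd * YD_TO_M

# The same checklist item can be scored identically for a human driver
# (~1.5 s reaction) and an automated one (~0.3 s detection-to-brake latency),
# both limited to ~4 m/s^2 of deceleration on wet asphalt.
print(stops_in_time(35, reaction_s=1.5, decel_ms2=4.0))   # False - neither passes
print(stops_in_time(35, reaction_s=0.3, decel_ms2=4.0))   # False
print(stops_in_time(15, reaction_s=0.3, decel_ms2=4.0))   # True - only at lower speed
```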

Developing a coherent (and standardized?) approach to safety testing should be at the top of the agenda, with a looming push in Congress to get the AV bill rolling. While there are indications that the industry might not be expecting much from the federal government, this bill could still allow CAVs on the road without standardized safety tests, with potentially dire consequences for the industry and its risk-seeking members. Not to mention that a high-risk business environment squeezes out players with shallower pockets (and possibly innovation too) and puts all road users, especially those without the benefit of a metal rig around them, at physical and financial risk were an accident to materialize. Signs of moderation, such as Cruise postponing the launch of its flagship product, allow one to be cautiously hopeful that the “go fast and break things” mentality will not take hold in the automated driving industry.

*Correction 9/9/19 – A correction was made regarding the membership to 1958 Agreement and participation at the World Forum.

A European Commission plan to implement the connected-car-specific 802.11p “Wi-Fi” standard for vehicle-to-vehicle (V2V) communication was scrapped in early July after a committee of the Council of the European Union (which formally represents individual member states during the legislative process) rejected it. The standard, also known as ITS-G5 in the EU, operates in the same frequency range as domestic Wi-Fi, now most often deployed under the 802.11n specification.

The reasons for this rejection were made clear by the opponents of “Wi-Fi V2V”: telecommunications operators and consortia of IT equipment and car manufacturers (such as BMW and Qualcomm) would never allow 5G and its ultra-low-latency “vehicle-to-everything” (V2X) solutions to be locked out. In turn, countries with substantial industrial interests in those sectors (Germany and Finland, to name only two) opposed the Commission plan.

Yet it appears that Commissioner Bulc had convincing arguments in favor of 802.11p. In her letter to the European Parliament’s members, she stresses that the technology is available now, and can be successfully and quickly implemented, for immediate improvements in road safety. In her view, failure to standardize now means that widespread V2V communication will not happen until the “5G solutions” come around.

5G is a polarizing issue, and information about it is often tainted by various industries’ talking points. It first matters to differentiate 5G as the follow-up to 4G from 5G as the whole-new-thing-everyone-keeps-talking-about. As the follow-up to 4G, 5G is the technology that underpins data delivery to individual cellphones. It operates mostly at higher frequencies than current 4G – frequencies which have a shorter range and thus require more antennas. That, in turn, explains why most current cellphone 5G deployments are concentrated in large cities.

The “other” 5G is based on a promise: the higher the frequency, the higher the bandwidth and the lower the latency. Going into the tens of GHz, in the millimeter-wave bands, 5G theoretically delivers large bandwidth (in the range of 10 Gbps) with latencies below 1 ms, with the major downside of a proportionally reduced range and ability to penetrate dense materials.

The logical conclusion of these technical limitations is that the high-bandwidth, low-latency 5G – the one set to revolutionize the “smart”-everything and that has managed to gather so much excitement – will only become a reality the day our cities are literally covered with antennas at every street corner, on every lamppost and stop sign. Feasible over decades in cities (with whose money, though?), a V2X world based on a dense mesh of antennas looks wholly unrealistic in lower-density areas.

Why does it make sense, then, to kick out a simple, cheap and patent-free solution to V2V communication in favor of a costly and hypothetical V2X?

Follow the money, as they say: what is key in this debate is understanding the basic economics of 5G. As deployment goes on, it is those who hold the “Standard Essential Patents” (SEPs) who stand to profit the most. As reported by Nikkei in May 2019, China leads the march with more than a third of SEPs, followed by South Korea, the US, Finland, Sweden and Japan.

If the seat of the V2V standard is already taken by Wi-Fi, that is one less market in which to recoup the costs of 5G development. It thus comes as no surprise that Finland was one of the most vocal opponents of the adoption of 802.11p, despite having no car industry – its telecom and IT sectors have invested heavily in 5G and are visibly poised to reap the rewards.

Reasonable engineers may disagree on the merits of 802.11p – as the United States’ own experience with DSRC, based on that same standard, shows. Yet the V2X 5G solutions are nowhere to be seen for now, and investing in such solutions was and remains to this day a risky enterprise. The investments required are huge, and one can predict there will be some public money involved at some point to deploy all that infrastructure.

“The automotive industry is now free to choose the best technology to protect road users and drivers,” said Lise Fuhr, director general of the European Telecommunications Network Operators’ Association (ETNO), after their win at the EU Council. I would rather say: free to choose the technology that will preserve telcos’ and some automakers’ risky business model. In the meantime, European citizens and taxpayers subsidize that “freedom” with more car accidents and fatalities, not to speak of the other monetary costs 5G brings about. The seat will have been kept warm until the day their 5G arrives – if it does – at some point between 2020 and 2025. Until then, we users will have to content ourselves with collision radars, parking cameras, cruise control and our good ol’ human senses.

Many have claimed that the EU’s General Data Protection Regulation (GDPR) would “kill AI.” Shortly after its entry into force at the end of May 2018, the New York Times was already relaying industry concerns: “the new European data privacy legislation is so stringent that it could kill off data-driven online services and chill innovations like driverless cars, tech industry groups warn.” Following that train of thought, news outlets, general and specialized alike, have since piled on about how such regulations on “data” would generally be harmful to innovation.

To be sure, other voices make themselves heard too. When trust in a technology is at stake, heralds of that technology understand that appearing to embrace regulation is a good PR move. Yet, beyond what could be seen as a cynical attitude, there are the pragmatists too. For them, regulation is a given, and with the right mindset, it can be transformed into an advantage.

This is the kind of mindset one could expect from European Union institutions. Speaking at a tech conference in Slovenia last April, EU Commissioner for Transport Violeta Bulc painted a rosy future for European transportation. Not only is Europe ready for automation, but it is embracing it. Already, car manufacturers must integrate certain automation components into all their new cars, such as lane assistance, distraction sensors and a black box used to “determine the cause of accidents.” And not only cars, but ships, planes, trains, even drones are part of the EU’s vision for an integrated transportation system, as part of its “mobility as a service,” or MaaS, vision. To support that all-electric and paperless MaaS: a “European GPS,” Galileo, and widespread 5G deployment, with priority even given to rural areas!

Is this all fluff? Far from seeking refuge from overbearing European red tape, most European AI and automation leaders see themselves in a “tortoise and the hare” paradigm: let the US innovators go fast and break things; we’ll take steady, measured steps forward, but we’ll get there, and maybe even before the US. This is the picture painted by a recent Bloomberg feature article on the booming European automation scene. Concretely, what are these steps? As far as AVs go, the first and main one is data sharing. Intense AV testing might be Arizona’s and California’s go-to model. But what is the use case for Waymo’s car beyond the dry, wide, and sunny streets of Phoenix? What about dense urban environments with narrow streets, like in Europe? Or snowy, low-density countryside roads, of which there are plenty in the US during the winter months? Safety in mass deployment will come from the capacity to aggregate everyone’s data, not just your own.

The most surprising part is that this push to open the “walled gardens” of the large OEMs does not even come from the government, but from tech firms. One of them, an Austrian company, is working on an open AV operating system, with the intention of keeping safety at the core of its business philosophy. As its founder told Bloomberg, being “open to information sharing” is a requirement for safety. With such an angle, one is not surprised to read that the main challenge the company faces is the standardization of data flows; a tough challenge, but isn’t that what innovation is about?

While the clever scientists won’t give the press all their tricks, many appear confident, stating simply that working with such regulations requires a “different approach.”

With roughly one clip a month – most of them corporate fluff – Waymo’s YouTube channel is neither the most exciting nor the most informative. At least, those (like me) who keep looking for clues about what Waymo is up to should not expect much to come out of there.

That was until February 20th, when Waymo low-key published a 15-second clip of its car in action – the main screen showing a rendering of what the car “sees” and the corner thumbnail showing the view from the dash cam. The key point: Waymo’s car apparently crosses an intersection with broken traffic lights, under police control, without any trouble. Amazing! Should we conclude that level 5 is at our very doorstep?

The car and tech press was quick to spot this one, and reports were mostly praise. Yet Brad Templeton, in his piece for Forbes, pinpoints a few things that the clip does not say. First, there is the fact that Waymo operates in a geographically enclosed area, where the streets, sidewalks and other hard infrastructure (lights, signs, and probably lane markings) are pre-mapped and already loaded into the system. In other words, Waymo’s car does not discover things as it cruises along the streets of Northern California. Moreover, the street lights here do not work, so technically this is just another four-way stop-signed intersection, with the difference that it is rather busy and there is a police officer directing traffic in the middle. Finally, the car just goes straight, which is by far the easiest option (no left turn, for example…)

Beyond that, what Waymo alleges, and wants us to see, is that the car “recognizes” the policeman – or, at the very least, recognizes that there is something person-shaped standing in the middle of the intersection making certain gestures at the car – and that the car’s sensors and Waymo’s algorithms are now capable of understanding the hand signals of law enforcement officers.

Now, less than a year ago, I heard the CEO of a major player in the industry assert that such a thing was impossible – in reference to CAVs being able to detect and correctly interpret the hand signals cyclists sometimes use. It seems that a few months later, we’re there. Or are we? One issue that flew more or less under the radar is how exactly the car recognizes the law enforcement officer here. Would a random passerby playing traffic cop have the same effect? If so, is that what we want?
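To see why the question matters, here is a deliberately naive sketch – in no way Waymo’s actual pipeline, and with every class and field name invented – of a perception-to-planning hand-off in which any person-shaped detection making a “stop” gesture is treated as an authoritative traffic director.

```python
# Illustrative only: a naive rule that yields to anyone in the roadway signalling STOP.
# Class and field names are hypothetical; this is not any vendor's implementation.
from dataclasses import dataclass
from enum import Enum, auto


class Gesture(Enum):
    STOP = auto()
    PROCEED = auto()
    UNKNOWN = auto()


@dataclass
class DetectedPerson:
    in_roadway: bool            # standing in the intersection?
    gesture: Gesture            # output of some gesture classifier
    is_verified_officer: bool   # would require uniform/beacon recognition -- often unavailable


def vehicle_should_yield(person: DetectedPerson) -> bool:
    """Naive rule: yield to anyone in the roadway signalling STOP."""
    return person.in_roadway and person.gesture == Gesture.STOP


if __name__ == "__main__":
    officer = DetectedPerson(in_roadway=True, gesture=Gesture.STOP, is_verified_officer=True)
    passerby = DetectedPerson(in_roadway=True, gesture=Gesture.STOP, is_verified_officer=False)
    print(vehicle_should_yield(officer))   # True
    print(vehicle_should_yield(passerby))  # True -- indistinguishable under this rule
```

Unless the `is_verified_officer` bit is actually populated by something (uniform recognition, a broadcast credential, remote confirmation), the officer and the prankster look exactly the same to the planner – which is precisely the worry.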

As a member of the “Connected and Automated Vehicles: Preparing for a Mixed Fleet Future” Problem Solving Initiative class held at the University of Michigan Law School last semester, my team and I had the opportunity to think about just that – how to make sure that road interactions stay as close as possible to what they are today – and, conversely, how to foreclose the awkward interactions or possible abuses that “new ways to communicate” would add. Should a simple hand motion be able to “command” a CAV? While such a question cuts across many domains, our perspective was mostly a legal one, and our conclusion was that any new signal that CAV technology enables (from the perspective of pedestrians and other road users) should be non-mandatory and limited to enabling mutual understanding of intentions, without affecting the behavior of the CAV. What we see in this video is the opposite: seemingly, the officer directing traffic is not equipped with special beacons broadcasting some form of “law enforcement” signal, and it is implied – although unconfirmed – that there is no human intervention. We are left awed, maybe. But reassured? Maybe not.

The takeaway may be just this: the issues raised by this video are real, and they are issues Waymo, and others, will at some point have to address publicly. Secrecy may be good for business, but only up to a point. Engagement by key industry players is of the highest importance if we want to foster trust and avoid having CAV technology crash-land in our societies.

The “Trolley Problem” has been buzzing around for a while now, so much so that it has become the subject of large empirical studies aiming to find a solution as close to “our values” as possible – and, more casually, of an episode of The Good Place.

Could it be, however, that the trolley problem isn’t one? In a recent article, the EU Observer, an investigative not-for-profit outlet based in Brussels, lashed out at the European Commission for its “tunnel vision” with regard to CAVs and for seemingly embracing the benefits of this technological and social change without an ounce of doubt or skepticism. While there are certainly things to be worried about when it comes to CAV deployment (see previous posts from this very blog by fellow bloggers here and here), the famed trolley might not be one of them.

The trolley problem seeks to illustrate one of the choices that a self-driving algorithm must – allegedly – make. Faced with a situation where the only alternative to killing is killing, the trolley problem asks who is to be killed: the young? The old? The pedestrian? The foreigner? Those who put forward the trolley problem usually do so to show that, as humans, we are faced with morally untenable alternatives when coding algorithms, like deciding who is to be saved in an unavoidable crash.

The trolley problem is not a problem, however, because it makes a number of assumptions – too many. The result is a hypothetical scenario which is simple, almost elegant, but mostly blatantly wrong. One such assumption is the rails. Not necessarily the physical ones, like those of actual trolleys, but the ones on which the whole problem is cast. CAVs are not on rails, in any sense of the word, and their algorithms will include the possibility of going “off-rails” when needed – like getting onto the shoulder or the sidewalk. The rules of the road already incorporate a certain amount of flexibility, and that flexibility will be built into the algorithm.

Moreover, the very purpose of the constant sensor input processed by the driving algorithm is precisely to avoid putting the CAV in a situation where the only remaining options are collision or collision.

But what if? What if a collision is truly unavoidable? Even then, it is highly misleading to portray CAV algorithm design as a job where one has to write a piece of code specific to every single decision made in the course of driving. The CAV will never be faced with an input of the kind in which we all too often frame the trolley problem: go left and kill this old woman, go right and kill this baby. The driving algorithm will certainly not understand the situation as one where it would kill someone; it may understand that a collision is imminent and that multiple paths are closed. What would it do, then? Brake, I guess, and steer to try to avoid the collision, like the rest of us would do.
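For the sake of illustration – and only as a sketch of the general idea, not of any vendor’s planner – a collision-imminent fallback looks much more like scoring a handful of feasible maneuvers than like choosing between victims. Every name and number below is invented.

```python
# Illustrative sketch: when a collision is imminent, score a few feasible maneuvers
# (brake straight, brake and steer within the drivable space) and pick the one with
# the lowest expected impact speed. Nothing in the inputs identifies who might be hit.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    feasible: bool                # stays within drivable space (shoulder allowed)?
    expected_impact_speed: float  # m/s if a collision still occurs; 0.0 if avoided


def pick_fallback(maneuvers: list[Maneuver]) -> Maneuver:
    """Choose the feasible maneuver with the lowest expected impact speed."""
    candidates = [m for m in maneuvers if m.feasible] or maneuvers  # degrade gracefully
    return min(candidates, key=lambda m: m.expected_impact_speed)


if __name__ == "__main__":
    options = [
        Maneuver("full_brake_in_lane", feasible=True, expected_impact_speed=6.0),
        Maneuver("brake_and_steer_left", feasible=False, expected_impact_speed=2.0),   # oncoming lane occupied
        Maneuver("brake_and_steer_right", feasible=True, expected_impact_speed=0.0),   # shoulder is clear
    ]
    print(pick_fallback(options).name)  # brake_and_steer_right
```

The point is not that this ten-line loop solves anything; it is that the decision variables are speeds, clearances and drivable space – not the age or identity of the people involved.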

Maybe what the trolley problem truly reveals is our unease with automated cars causing accidents – that is, because they are machines, we are much more comfortable with the idea that they will be perfect and will be coded so that no accident may ever happen. If, as a first milestone, CAVs become as safe as human drivers, that would certainly be a great scientific achievement. I do recognize, however, that it might not be enough for public perception, but that speaks more to our relationship with machines than to any truth behind the murderous trolley. All in all, it is unfortunate that such a problem continues to keep brains busy while more tangible problems (such as what to do with all those batteries) deserve research, media attention and political action.

The European Parliament, the deliberative institution of the European Union which also acts as a legislator in certain circumstances, approved on February 20, 2019 the European Commission’s proposal for a new Regulation on motor vehicle safety. The proposal is now set to move to the next step of the EU legislative process; once enacted, an EU Regulation is directly applicable in the law of the 28 (soon to be 27) member states.

This regulation is noteworthy as it means to pave the way for Level 3 and Level 4 vehicles by obligating car makers to integrate certain “advanced safety features” into their new cars, such as driver attention warnings, emergency braking and a lane-departure warning system. Since many of us are familiar with such features, already found in many recent cars, one may wonder how this would facilitate the deployment of Level 3 or even Level 4 cars. The intention of the European legislator is not outright obvious, but a more careful reading of the legislative proposal reveals that the aim goes well beyond the safety features themselves: “mandating advanced safety features for vehicles . . . will help the drivers to gradually get accustomed to the new features and will enhance public trust and acceptance in the transition toward autonomous driving.” Looking further at the proposal reveals that another concern is the changing mobility landscape in general, with “more cyclists and pedestrians [and] an aging society.” Against this backdrop, there is a perceived need for legislation, as road safety metrics have at best stalled, and are even on the decline in certain parts of Europe.

In addition, Advanced Emergency Braking (AEB) systems have been trending at the transnational level in these early months of 2019. The World Forum for Harmonization of Vehicle Regulations (known as WP.29) has recently put forward a draft resolution on such systems, with a view to standardizing them and making them mandatory for WP.29 members, which include most Eurasian countries along with a handful of Asia-Pacific and African countries. While the World Forum is hosted by the United Nations Economic Commission for Europe (UNECE), a regional commission of the Economic and Social Council (ECOSOC) of the UN, it notably does not count among its members certain UNECE member states such as the United States or Canada, which have so far declined to take part in the World Forum. To be sure, the North American absence (along with that of China and India, for example) is not new; they have never taken part in the World Forum’s work since it started operating in 1958. The small amber front corner lights one sees on US cars are not something you will ever see on a car circulating on the roads of a WP.29 member state; still, one may wonder whether the level of complexity involved in designing CAV systems will not eventually push OEMs toward harmonization. It is one thing to live with manufacturing different types of lights, and quite another to design and manufacture different CAV systems for different parts of the world.
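To give a sense of what is actually being harmonized, AEB behavior is commonly described in terms of time-to-collision (TTC): the distance to the obstacle ahead divided by the closing speed. The sketch below is a minimal illustration of that logic only; the threshold value is invented and is not the figure in the WP.29 draft or in any regulation.

```python
# Minimal illustration of the TTC-based logic behind AEB systems.
# The 1.5 s threshold is a made-up figure, not a regulatory value.
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if speeds stay constant; infinity if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps


def aeb_should_brake(range_m: float, closing_speed_mps: float, ttc_threshold_s: float = 1.5) -> bool:
    """Trigger automatic braking when TTC drops below the (hypothetical) threshold."""
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s


if __name__ == "__main__":
    print(aeb_should_brake(range_m=30.0, closing_speed_mps=10.0))  # False: TTC = 3.0 s
    print(aeb_should_brake(range_m=12.0, closing_speed_mps=10.0))  # True: TTC = 1.2 s
```

Agreeing on the threshold, the test scenarios and the required deceleration is exactly the kind of detail WP.29 exists to settle – and exactly where regulatory styles diverge.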

Yet it is well known that certain North American regulators are no big fans of such a mandatory, harmonized approach. In 2016, the US DoT proudly announced an industry commitment by almost all car makers to implement AEB systems in their cars, with the only requirement being that such systems satisfy set safety objectives. While everyone might agree that limited aims are sometimes the best way to get closer to the ultimate, bigger goal, regulatory styles vary. In the end, one must face the fact that by 2020, AEB systems will be harmonized for a substantial part of the global car market – and maybe, de facto, even in North America. And given that the World Forum has received a clear mandate from the EU – renewed as recently as May 2018 – to develop a global and comprehensive CAV standard, the North American and Asian governments that have so far declined to join WP.29 might only be forfeiting an opportunity to influence the outcome of such CAV standards by sticking to their guns.

The global automotive industry – and the world of global corporations – was shaken when Carlos Ghosn, Renault-Nissan-Mitsubishi’s (“RNM”) CEO, was arrested by Japanese authorities at the end of November 2018 for multiple alleged counts of financial misconduct. For those who had been following developments inside the RNM “Alliance,” this apparently sudden crackdown came as no surprise. Irrespective of the substance of the claims against Ghosn (and it is reasonable to believe that they are at least in part substantiated), the story of Ghosn’s downfall is a long one, told in long form in a recent Bloomberg piece.

One part of that story is the rise of Nissan, early on relegated to second fiddle in the Alliance’s grand scheme of things, and the relative stagnation of Renault since. While the former needed rescuing when the Alliance was set up in 1999, facts on the ground have changed: Nissan’s market home run with its all-electric, consumer-accessible Leaf secured the Japanese carmaker a comfortable position. To say the least, these facts have not always been reflected in the corporate structure and decision-making practices at the Alliance level. Increasingly, the overbearing role of the French state, the largest (by a hair’s breadth) shareholder of Renault, became an irritant to the Japanese partner. As reported by the French investigative weekly Le Canard Enchaîné of November 28, 2018, Ghosn was the keystone of the formal – and informal – corporate governance entente throughout the Alliance itself, and both Renault and Nissan individually, holding executive or board positions in all three entities. Where Ghosn once stood alone now stand three different persons, and the Japanese carmaker’s economic domination over its former French rescuer has only become more apparent. While French media were quick to point out that the nomination of Jean-Dominique Senard to head Renault (and eventually Nissan, and eventually the Alliance itself) would bring everything back to normal, the Financial Times reported on February 19, 2019 that Nissan would oppose his nomination as CEO of the Japanese carmaker, thereby disavowing the old governance model.

In parallel to this corporate drama, as of February 2019 the Alliance is allegedly negotiating a deal with Alphabet’s Waymo that would have them build “robotaxis” and develop the sprawling software infrastructure that necessarily comes with such a project. Most of the press seems to think that a Waymo deal would bring the energy needed to revive the Alliance after a hard hit. The revelations about the deal came with the usual disclaimers: an Alliance spokesman termed all this mere “speculation,” and Waymo, well, Waymo draped itself in its usual ominous silence.

Could Waymo end up changing its mind about the whole thing, given the deepening crisis rocking the Alliance? We mere ill-informed mortals can only speculate about what Waymo does or what Waymo wants, but if a mature deal is not something to be ditched on a whim, it might make sense to keep the whole thing closer to Nissan than to Renault, and above all to steer clear of Eurasian transnational corporate politics. Rather than reviving the Alliance, the Waymo deal might just be an opportunity to ditch it. Interestingly enough, the original February 5, 2019 report by Nikkei on the deal clearly states that it would involve the deployment of mobility-as-a-service (MaaS) infrastructure in Japan with cars made by Nissan (and maybe made in Japan, too?). Moreover, Renault’s stance on CAVs is not quite clear. Its much-hyped (at least judging by the awed French journalists) Symbioz – a Level 3 CAV – is now nowhere to be seen. If I were Nissan or Waymo, I might just say non merci to Renault this time around.