Cite as: Bryant Walker Smith, How Reporters Can Evaluate Automated Driving Announcements, 2020 J. L. & MOB. 1.

This article identifies a series of specific questions that reporters can ask about claims made by developers of automated motor vehicles (“AVs”). [1. These questions first appeared in Questions to Ask About AV Announcements, Law of the Newly Possible (last updated Oct. 14, 2019). This article updates and explains them.] Its immediate intent is to facilitate more critical, credible, and ultimately constructive reporting on progress toward automated driving. In turn, reporting of this kind advances three additional goals. First, it encourages AV developers to qualify and support their public claims. Second, it appropriately manages public expectations about these vehicles. Third, it fosters more technical accuracy and technological circumspection in legal and policy scholarship.

This third purpose goes to the core of this interdisciplinary journal. Legal and policy scholarship about emerging technologies often relies at least in part on popular reporting. On one hand, this reporting can provide timely and accessible insights into these technologies, particularly when the scientific literature cannot. On the other hand, this reporting can reflect misconceptions based on incomplete information supplied by self-interested developers—misconceptions that are then entrenched through legal citation. For example, I have pushed back against claims that automated driving will be a panacea, [2. See Bryant Walker Smith, How Governments Can Promote Automated Driving, 47 N.M. L. Rev. 99 (2017); Bryant Walker Smith, Managing Autonomous Transportation Demand, 52 Santa Clara L. Rev. 1401 (2012).] that its technical challenges have long been “solved,” [3. See Bryant Walker Smith, Automated Driving and Product Liability, 2017 Mich. St. L. Rev. 1; Bryant Walker Smith, A Legal Perspective on Three Misconceptions in Vehicle Automation, in Lecture Notes in Mobility: Road Vehicle Automation 85 (Gereon Meyer & Sven Beiker eds., 2014).] and that nontechnical issues involving regulation, liability, popularity, and philosophy are therefore the paramount obstacles to deployment. [4. See Bryant Walker Smith, Automated Vehicles Are Probably Legal in the United States, 1 Tex. A&M L. Rev. 411 (2014); Bryant Walker Smith, supra note 3 (discussing product liability); Bryant Walker Smith, The Trolley and the Pinto: Cost-Benefit Analysis in Automated Driving and Other Cyber-Physical Systems, 4 Tex. A&M L. Rev. 197 (2017).]

Common to many of these misconceptions is the question of whether automated driving is finally here. AVs were 20 years away from the late 1930s until the early 2010s and have been about five years away ever since. This is clearly a long history of misplaced optimism, but more recent predictions, while still moving targets, are now proximate enough to realistically drive decisions about investment, planning, and production. Indeed, of the companies that claim to be even closer, some really are—at least to automated driving of some kind.

The “what” of these predictions matters as much as the “when,” and the leading definitions document for automated driving—SAE J3016—is helpful for understanding this what. [5. SAE INT’L, J3016, TAXONOMY AND DEFINITIONS FOR TERMS RELATED TO DRIVING AUTOMATION SYSTEMS FOR ON-ROAD MOTOR VEHICLES (last updated June 15, 2018) [hereinafter SAE J3016]. The term “automated vehicle” deviates slightly from SAE J3016 but is nonetheless widely accepted. See, e.g., U.N. ECON. COMM’N FOR EUR., RESOLUTION ON THE DEPLOYMENT OF HIGHLY AND FULLY AUTOMATED VEHICLES IN ROAD TRAFFIC (Oct. 2019); U.S. DEP’T OF TRANSP., USDOT AUTOMATED VEHICLES ACTIVITIES (last updated Feb. 7, 2020); Final Act, With Comments: Uniform Automated Operation of Vehicles Act (2019). However, the levels of automation generally describe features on vehicles rather than the vehicles themselves. See SAE J3016.] The figure below offers a gloss on these definitions, including the widely (mis)referenced levels of driving automation. No developer has credibly promised level 5 (full automation) anytime soon. But many are working toward various applications of level 4 (high automation), which could, depending on their implementation, include everything from low-speed shuttles and delivery robots to traffic jam automation features and automated long-haul trucks. When anything approaching level 5 does become a reality, it will likely be an afterthought in a world that has already been revolutionized in a hundred other ways.

Figure: A Gloss on SAE J3016 [6. This first appeared at Automated Driving Definitions, LAW OF THE NEWLY POSSIBLE (last updated Aug. 1, 2018).]

Your role in driving automation

Driving involves paying attention to the vehicle, the road, and the environment so that you can steer, brake, and accelerate as needed. If you’re expected to pay attention, you’re still driving — even when a vehicle feature is assisting you with steering, braking, and/or accelerating. (Driving may have an even broader legal meaning.)

Types of trips

  • You must drive for the entire trip
  • You will need to drive if prompted in order to maintain safety
  • You will need to drive if prompted in order to reach your destination
  • You will not need to drive for any reason, but you may drive if you want
  • You will not need to drive for any reason, and you may not drive

Types of vehicles

  • Vehicles you can drive
  • Vehicles you can’t drive

Types of vehicle features

These are the levels of driving automation. They describe features in vehicles rather than the vehicles themselves. This is because a vehicle’s feature or features may not always be engaged or even available.

The operational design domain (“ODD”) describes when and where a feature is specifically designed to function. For example, one feature may be designed for freeway traffic jams, while another may be designed for a particular neighborhood in good weather.

By describing a feature’s level of automation and operational design domain, the feature’s developer makes a promise to the public about that feature’s capabilities.

Assisted driving features

  • L0: You’re driving
  • L1: You’re driving, but you’re assisted with either steering or speed
  • L2: You’re driving, but you’re assisted with both steering and speed

Automated driving features

  • L3: You’re not driving, but you will need to drive if prompted in order to maintain safety
  • L4: You’re not driving, but either a) you will need to drive if prompted in order to reach your destination (in a vehicle you can drive) or b) you will not be able to reach every destination (in a vehicle you can’t drive)
  • L5: You’re not driving, and you can reach any destination

As the following questions for reporters make clear, automated driving is much more than just a level of automation. The questions, which fall into five overlapping categories (human monitoring, technical definitions, deployment, safety, and reevaluation), are:

1. Human monitoring

1.1. Is a person monitoring the AV from inside the vehicle? Why? Are they always paying attention? How can they intervene? How often do they intervene? How are they supervised?

1.2. Is a person monitoring the AV from outside the vehicle? Why? Are they always paying attention? How can they intervene? How often do they intervene? How are they supervised?

1.3. Is a person monitoring the AV from a remote center? Why? Are they always paying attention? How can they intervene? How often do they intervene? How are they supervised?

1.4. What are specific examples of difficult scenarios in which a person did not intervene? In which a person unnecessarily intervened? In which a person necessarily intervened? What form did this intervention take?

1.5. At any moment, what is the ratio between the number of people who are monitoring and the number of AVs that are deployed?

2. Technical definitions

2.1. What level of automation corresponds to the design intent for the AV? What level of automation corresponds to how the AV is actually being operated?

2.2. In what environment is the AV operating? On roads open to other motor vehicles? To bicyclists? To pedestrians?

2.3. What infrastructure, if any, has been changed or added to support the AV in this environment?

2.4. If the AV perceives that its path is obstructed, what does it do? For example, does it wait for the obstruction to clear, wait for a person to intervene, or plan and follow a new path?

3. Deployment

3.1. What is the AV’s deployment timeline? For how long will it be deployed? Is this a temporary or permanent service?

3.2. Who can buy the AV or its automated driving feature? Under what conditions?

3.3. Who can ride in, receive products or services from, or otherwise use the AV? Under what conditions?

3.4. As part of the deployment, who is paying whom? For what?

3.5. What promises or commitments has the developer of the AV made to governments and other project partners?

3.6. What previous promises, commitments, and announcements has the developer made about their AVs? Have they met them? Do they still stand by them? What has changed, and what have they learned? Why should we believe them now?

4. Safety

4.1. Why do the developer of the AV and any companies or governments involved in its deployment think that the deployment is reasonably safe? Why should we believe them?

4.2. What will the developer of the AV and any companies or governments involved in its deployment do in the event of a crash or other incident?

5. Reevaluation

5.1. Might the answers to any of these questions change during the deployment of the AV? How and why? What will trigger that change?

The remainder of this article explores these questions with a view toward assessing the reality behind a given automated driving announcement or activity. To this end, it is important to understand that a vehicle that requires an attentive safety driver is not truly an automated vehicle. Aspirational, yes. But actual, no. This point underlies many of the questions that follow.

Human Monitoring

Is a person monitoring the AV from inside the vehicle? Why? Are they always paying attention? How can they intervene? How often do they intervene? How are they supervised?

Imagine that as you are boarding a plane, the captain announces: “I’ll be using autopilot today. We’ll be pushing off shortly. Have a nice flight.” How do you feel?

Now imagine that the captain instead announces: “You’ll be using autopilot today, because I’m getting off. You’ll be pushing off shortly. Have a nice flight.” How do you feel now?

Just as there is a significant difference between these two scenarios, automated driving under the supervision of a safety driver is not the same as automated driving without this supervision. Yet news headlines, ledes, and even entire articles often describe only “driverless” vehicles—even when those vehicles are supervised by at least one trained safety driver who is physically present for every trip.

This confusion has consequences. Casual readers (and even reporters) may believe that an automated driving project is far more technically advanced or economically feasible than it really is. They may therefore be more likely to look for nontechnical explanations for the seemingly slow rollout of automated vehicles. Ironically, they may also discount truly significant news, such as Waymo’s recent decision to remove safety drivers from some of its vehicles. [7. Dan Chu, Waymo One: A year of firsts, WAYMO (Dec. 5, 2019).]

Reporters should therefore ask whether an automated vehicle is being operated with or without a safety driver inside it, and they should include the answer to this question in the first rather than the final paragraph of their stories. Related questions can then provide further context. Is the safety driver seated in the traditional driver’s seat (if there is one) or elsewhere in the vehicle? Can they immediately brake, steer, and accelerate the vehicle? And, in the interest of safety, how are they supervised? As Uber’s 2018 fatal crash tragically demonstrated, a system’s machine and human elements can both be fallible. [8. In short: Both the design and the driver were lax on the assumption that the other would not be. Cf. NAT’L TRANSP. SAFETY BD., NTSB – ADOPTED BOARD REPORT HAR-19/03 (Dec. 12, 2019) (describing the factors that contributed to the crash).]

For the most part, an AV developer that uses safety drivers is not yet confident that its vehicles can reliably achieve an acceptable level of safety on their own. This is still true even if a vehicle completes a drive without any actual intervention by that safety driver. At least in the United States, alternative explanations for retaining the safety driver—to comply with ostensible legal requirements, to reassure passengers, or to perform nondriving functions—are generally lacking.

At the same time, AV developers might reach different conclusions about the requisite level of safety or the requisite level of confidence in that safety. To use a very limited analogy: A rock climber’s rejection of ropes and harnesses probably says more about the climber’s confidence than about their skill.

Is a person monitoring the AV from outside the vehicle? Why? Are they always paying attention? How can they intervene? How often do they intervene? How are they supervised?

A safety driver might be present near rather than inside a vehicle. For example, a demonstration of a small delivery vehicle that is not designed to carry people may nonetheless involve a safety driver seated in a car that trails the delivery vehicle. Reliance on such a safety driver places a significant technical and economic asterisk on claims about the capabilities of these delivery vehicles. And because this arrangement also depends on a robust communications system, it introduces an additional safety issue of its own.

Tesla’s recent introduction of its Smart Summon feature also shows why unoccupied does not necessarily mean driverless. [9. Introducing Software Version 10.0, TESLA BLOG (Sept. 26, 2019).] This feature does not reach the threshold for automated driving—and certainly not “full self-driving”—because it is designed with the expectation that there will be a human driver who will supervise the vehicle from the outside and intervene to prevent harm. Emphasizing that the user is still a driver may help to temper claims and assumptions that could lead to the dangerous misuse of this driver assistance feature.

Is a person monitoring the AV from a remote center? Why? Are they always paying attention? How can they intervene? How often do they intervene? How are they supervised?

For years, one of the more contentious issues in the automated driving community has involved what might be neutrally termed “remote facilitation of the driving task.” This phrase encompasses a broad spectrum of potential roles performed by actors outside the vehicle—roles that are important to understanding the technical and safety claims made by developers of automotive technologies.

On one side of the spectrum lies remote driving, in which a human driver who may be many miles away from a vehicle uses a communications system to perceive the vehicle’s driving environment and to steer, accelerate, and brake in real time—what SAE J3016 calls “performance of the dynamic driving task.” [10. SAE J3016, supra note 5.] This remote driving is orthogonal to automated driving (in other words, neither its synonym nor its antonym). Indeed, some automated driving developers skeptical of remote driving are eager to differentiate the two in both language and law.

On the other side of the spectrum lies network monitoring. An automated driving company might maintain a facility in which human agents collectively monitor its AVs, communicate with the users of those vehicles, and coordinate with emergency responders. While stressing that their human agents are not performing the dynamic driving task, some AV developers have been vague about what specifically these agents are and are otherwise not doing.

Journalists, however, can be concrete in their questioning. They can ask whether there is a remote person assigned to or available for each vehicle, what that person does during the vehicle’s normal operation, and what that person does in less common situations. For example, imagine that an AV approaches a crash scene and concludes that it cannot confidently navigate by itself. What role might a remote agent play? Might this person give the vehicle permission to proceed? Might they manually identify roadway objects that the AV could not confidently classify? Might they sketch a rough travel path for the AV to follow if the AV agrees? Might they direct the AV to follow the path even if the AV would otherwise reject it? Or might they actually relay specific steering, accelerating, and braking commands to the AV?

How a company answers these questions can provide insight into the maturity of its automated driving program. If the company uses physically present safety drivers in its deployments (as most still do), then these questions are largely speculative. But if the company plans to remove these safety drivers, then it should have careful and concrete answers. And if the company declines to share these answers, one might reasonably inquire why.

What are specific examples of difficult scenarios in which a person did not intervene? In which a person unnecessarily intervened? In which a person necessarily intervened? What form did this intervention take?

While anecdotes alone are not enough to establish reasonable safety, they can be helpful in measuring progress. An automated driving developer that has been testing its vehicles will have stories about unusual situations that those vehicles (and their safety drivers) encountered. Many of these developers may be happy to share situations that the automated vehicle handled or could have handled without intervention. But pairing these with situations in which human intervention was necessary provides important context. And a company’s willingness to share these more challenging stories demonstrates its trustworthiness.

At any moment, what is the ratio between the number of people who are monitoring and the number of AVs that are deployed?

Economic feasibility offers another metric for automated driving—and one that is intertwined with technical feasibility. Economically, automated driving is both attractive and controversial in large part because, true to its name, it promises to reduce the need for human drivers. Asking whether this is in fact happening—that is, whether the ratio of human monitors to automated vehicles is less than 1.0—is another way to assess the technical progress of an automated driving program.

This may be especially helpful with respect to pilot projects involving specialized vehicles traveling at low speeds in limited areas such as airports, downtowns, and shopping malls. There have been and will likely continue to be numerous announcements about these projects across the country. But so long as these vehicles are deployed with at least one safety driver on board, their economic viability is unclear. After all, their hosts could have achieved (and could still achieve) the same functional benefits by simply deploying conventional fleets.

Technical Definitions

What level of automation corresponds to the design intent for the AV? What level of automation corresponds to how the AV is actually being operated?

Automated driving developers are almost certainly familiar with the levels of driving automation defined in SAE J3016, though not necessarily proficient in applying them. They may even reference these levels in their announcements—correctly or not. Understanding the levels can help in assessing these claims.

Most automated driving development is focused on levels 3 and 4. On one side, levels 0, 1, and 2 are in fact driver assistance rather than automated driving, and a credible developer should not suggest otherwise. After all, features at these levels only work unless and until they don’t, which is why a human driver is still needed to supervise them. On the other side, level 5 describes a feature that can operate everywhere that humans can drive today. But while this is the hope of many automated driving developers, it remains a distant one.

A confusing quirk in the levels of automation is the difference between what I call an aspirational level and what I call a functional level. The aspirational level describes what an automated driving developer hopes its system can achieve, whereas the functional level describes what the automated driving developer assumes its system can currently achieve. For example, most developers of low-speed automated shuttles envision level 4 automated driving, which would not require a human driver for safe operation. But most of these developers still keep their systems under the supervision of human safety drivers who are expected to pay attention, which corresponds to level 2 rather than level 4. Nonetheless, because SAE J3016 focuses on design intent, developers of these systems correctly characterize them as level 4 (the aspirational level) rather than level 2 (the functional level). [11. SAE J3016, supra note 5 (explaining that the developer of a feature determines its level of automation).]

Similarly, California’s Department of Motor Vehicles considers automated vehicles that are merely being tested to be “autonomous” even though their safe operation still requires a human safety driver. [12. Cf. Key Autonomous Vehicle Definitions, STATE OF CALIFORNIA DEPARTMENT OF MOTOR VEHICLES (last visited March 9, 2020) (defining an “autonomous test vehicle” as “a vehicle that has been equipped with technology that is a combination of both hardware and software that, when engaged, performs the dynamic driving task, but requires a human test driver or a remote operator to continuously supervise the vehicle’s performance of the dynamic driving task”).] Otherwise, the rules requiring a safety driver absent specific permission would apply to a null set. Because of this interpretation, companies that are testing or deploying automated driving features in California must comply with these rules, while companies that are testing or deploying mere driver assistance features need not. This is why Uber needed permission to test its automated vehicles in California, but Tesla did not need permission to make its Autopilot or Smart Summon driver assistance features available in that state. [13. This was understandably frustrating for Uber. See Anthony Levandowski, Statement on Self-Driving in San Francisco (Dec. 17, 2016) (transcript available at Uber Newsroom). But see Bryant Walker Smith, Uber vs. the Law, THE CENTER FOR INTERNET AND SOCIETY: BLOG (Dec. 17, 2016).] Yet, as these examples suggest, testing an automated driving feature is in many ways technically indistinguishable from using a driver assistance feature.

Asking about the aspirational level of automation invites a company to make a public characterization that has marketing and regulatory implications. And asking about the functional level of automation invites a company to temper its aspirations with the current limitations of its technologies.

References to the levels of automation may be helpful in discussions with companies but are generally not necessary or even helpful when reporting to the public. Instead, key phrases can more clearly communicate the current state of a given technology. Three of the most important are:

  • “A driver assistance feature that still requires a human driver to pay attention to the road” (levels 1 and 2)
  • “A vehicle that is designed to drive itself but needs a safety driver until it can reliably do so” (aspirational level 4)
  • “A vehicle that drives itself without the need for a safety driver” (functional level 4)

In what environment is the AV operating? On roads open to other motor vehicles? To bicyclists? To pedestrians?

Automated vehicles have been a reality for decades: They are called elevators, escalators, people movers, and automated trains. But whereas these vehicles operate in highly controlled environments, automated motor vehicles are particularly challenging in large part because the driving environments they will face are so challenging.

Below level 5, however, an automated driving feature operates only under limited driving conditions. SAE J3016 terms these conditions the operational design domain, [14. See SAE J3016, supra note 5.] and this ODD is essential to defining an AV’s capabilities. For example, some automated driving features may operate only on freeways, and some AVs may be restricted to certain low-speed routes within certain neighborhoods. Indeed, early automation activities are generally characterized by some combination of slow speeds, simple environments, and supervised operations.

Developers should be upfront about these limitations in their announcements—and if they are not, reporters should ask whether and how the AVs mix with other road users, including pedestrians, bicyclists, and conventional drivers. There is a big difference, for example, between deploying in complex mixed traffic and deploying on a dedicated route with no other traffic.

As an aside: State vehicle codes apply to public roads, and they may also apply to private facilities such as parking garages and private roads that are nonetheless open to the public. [15. See, e.g., N.Y. VEH. & TRAF. LAW § 1100(a) (McKinney 2019) (“The provisions of this title apply upon public highways, private roads open to public motor vehicle traffic and any other parking lot, except where a different place is specifically referred to in a given section.”).] For this reason, AVs that are deployed only in privately controlled areas may still have to comply with state laws generally applicable to motor vehicles as well as state laws specific to AVs. Similarly, these laws may (or may not) also apply to delivery robots that travel on sidewalks and crosswalks. [16. E.g., N.Y. VEH. & TRAF. LAW § 144 (McKinney 2019) (“Sidewalk. That portion of a street between the curb lines, or the lateral lines of a roadway, and the adjacent property lines, intended for the use of pedestrians.”); id. § 159 (“Vehicle. Every device in, upon, or by which any person or property is or may be transported or drawn upon a highway, except devices moved by human power or used exclusively upon stationary rails or tracks.”).] Developers that suggest otherwise can be asked to explain the basis for their legal conclusion.

What infrastructure, if any, has been changed or added to support the AV in this environment?

Many AV announcements involve specific tests, pilots, or demonstrations that may or may not be easily replicated in another location and scaled to many more locations. An AV that can accept today’s roads as they are—inconsistently designed, marked, maintained, and operated—will be much easier to scale than one that requires the addition or standardization of physical infrastructure. Infrastructure changes may well be beneficial and practical, but they are nonetheless important considerations in evaluating scalability. For this reason, automated driving developers should be asked to identify them.

If the AV perceives that its path is obstructed, what does it do? For example, does it wait for the obstruction to clear, wait for a person to intervene, or plan and follow a new path?

Even infrastructure that is well maintained will still present surprises, and how an AV is designed to deal with these surprises provides some insight into its sophistication. Many early automated vehicles would simply stop and wait if a pedestrian stepped into their path (or a drop of rain confused their sensors). Even today, many AVs rely on frequent human intervention of some kind. This question accordingly invites a developer to describe the true capabilities of its system.

Deployment

What is the AV’s deployment timeline? For how long will it be deployed? Is this a temporary or permanent service?

Many recent AV announcements have focused less on technical capabilities and more on actual applications, from shuttling real people to delivering real products. These specific applications often involve partnerships with governments, airports, retailers, shippers, or property managers. But it can be unclear whether these applications are one-time demonstrations, short-term pilots, or long-term deployments. Querying—and, in the case of public authorities, requesting records about—the duration of these projects helps to understand their significance.

Who can buy the AV or its automated driving feature? Under what conditions?

There is an important difference between an automated driving developer that is marketing its actual system and a developer that is merely marketing itself. Yet automated driving announcements tend to conflate actual designs, promises of designs, and mere visions of designs. Automakers previewing new vehicle features, shuttle developers announcing new collaborations, and hardware manufacturers touting new breakthroughs all invite the question, “Can I actually buy this vehicle now?”

Who can ride in, receive products or services from, or otherwise use the AV? Under what conditions?

This same logic applies to announcements about services that purportedly involve automated driving. The launch of an automated pizza delivery service open to everyone in a city is much more significant than the staged delivery of a single pizza by a single AV. So too with the automation of long-haul shipping, low-speed shuttles, and taxis. Services that at least part of the public can actually and regularly use are far more significant than one-off demonstrations.

As part of the deployment, who is paying whom? For what?

For the reasons already discussed, the economics of early deployments can be hazy. Why are automated shuttles, each with its own safety driver, more cost-effective than conventional shuttles? Why are automated trucks, each with its own safety driver, more cost-effective than conventional trucks? The financial arrangements with project partners—especially public authorities subject to open records laws—can offer some insight into whether these early deployments provide tangible benefits or are instead largely exploratory or promotional.

What promises or commitments has the developer of the AV made to governments and other project partners?

When project partners are involved for long-term rather than near-term benefit, it can be helpful to query their expectations. Imagine, for example, that an airport or retirement community announces its intent to host automated shuttles that are supervised by safety drivers. When has the developer of these shuttles suggested or promised that safety drivers will no longer be necessary? And who bears the cost of paying these drivers in the interim?

What previous promises, commitments, and announcements has the developer made about their AVs? Have they met them? Do they still stand by them? What has changed, and what have they learned? Why should we believe them now?

Because innovation is unpredictable, claims about deployment timelines may turn out to be incorrect even if they are made in good faith. However, the companies (or people) responsible for these claims should acknowledge that they were wrong, explain why, and temper their new claims accordingly. Reporters should demand this context from their subjects and report it to their audience. Of course, a commercial emphasis on speed and controversy can make this especially challenging, in which case the headline “Company X makes another claim” could at least be used for the more egregious offenders.


Why do the developer of the AV and any companies or governments involved in its deployment think that the deployment is reasonably safe? Why should we believe them?

While the broader topic of AV safety is beyond the scope of this article, it should occupy a prominent place in any automated driving announcement. For years, I have encouraged companies that are developing new technologies to publicly share their safety philosophies—in other words, to explain what they are doing, why they think it is reasonably safe, and why we should believe them. Journalists can pose these same questions and push for concrete answers.

The phrasing of these questions matters. For example, a company might explain that its AV testing is reasonably safe because it uses safety drivers. But it should also go further by explaining why it believes that the presence of safety drivers is sufficient for reasonable safety. Conversely, if a company does not use safety drivers, it should explain why it believes that they are not necessary for reasonable safety. And in answering these questions, the company may also have to detail its own view of what reasonable safety means.

In this regard, it is important to recognize that safety is not just a single test. Instead, it includes a wide range of considerations over the entire product lifecycle, including management philosophy, design philosophy, hiring and supervision, standards integration, technological monitoring and updating, communication and disclosure, and even strategies for managing inevitable technological obsolescence. In this way, safety is a marriage rather than just a wedding: a lifelong commitment rather than a one-time event.

What will the developer of the AV and any companies or governments involved in its deployment do in the event of a crash or other incident?

Safety is not absolute. Indeed, just because an AV is involved in a crash does not mean that the vehicle is unsafe. Regardless, an AV developer should have a “break-the-glass” plan to document its preparation for and guide its response to incidents involving its AVs. (So too should governments.) How will it recognize and manage a crash? How will it coordinate with first responders and investigators? A developer that has such a plan—and is willing to discuss the safety-relevant portions of it—signals that it understands that deployment is about more than just the state of the technologies.


Might the answers to any of these questions change during the deployment of the AV? How and why? What will trigger that change?

This article ends where it began: Automated driving is complex, dynamic, and difficult to predict. For these reasons, many of an AV developer’s answers to the questions identified here could evolve over the course of a deployment. On one hand, the realities of testing or deployment may demand a more cautious approach or frustrate the fulfillment of some promises. On the other hand, developers still hope to remove their safety drivers and to expand their operational design domain at some point. How—and on what basis—will they decide when to take these steps? Their answers can help to shift discussions from vague and speculative predictions to meaningful and credible roadmaps.

I previously blogged on automated emergency braking (AEB) standardization taking place at the World Forum for Harmonization of Vehicle Regulations (also known as WP.29), a UN working group tasked with administering several international conventions on the topic, including the 1958 Agreement on wheeled vehicle standards.

It turns out the World Forum recently published the result of a joint effort undertaken by the EU, US, China, and Japan regarding AV safety. Titled Revised Framework document on automated/autonomous vehicles, it aims to “provide guidance” regarding “key principles” of AV safety, in addition to setting the agenda for the Forum’s various subcommittees.

One may first wonder what China and the US are doing there, as they are not parties to the 1958 Agreement. It turns out that participation in the World Forum is open to everyone (at the UN), regardless of membership in the Agreement. China and the US are thus given the opportunity to influence the adoption of one standard over another through participation in the Forum and its sub-working groups, without being bound if the outcome is not to their liking in the end. Peachy!

International lawyers know that every word counts, and every word can be assumed to have been negotiated down to the comma. Using that kind of close textual analysis, what stands out in this otherwise terse UN prose? First, the only sentence couched in mandatory terms. Setting out the drafters’ “safety vision,” it goes as follows: AVs “shall not cause any non-tolerable risk, meaning . . . shall not cause any traffic accidents resulting in injury or death that are reasonably foreseeable and preventable.”

This sets the bar very high in terms of AV behavioral standards, markedly higher than for human drivers. We cause plenty of accidents that would be “reasonably foreseeable and preventable.” A large share of accidents is probably the result of human error, distraction, or recklessness, all things “foreseeable” and “preventable.” Nevertheless, we are allowed to drive and are insurable (except in the most egregious cases). Whether this is a good standard for AVs can be debated, but what is certain is that it reflects the general idea that we as humans hold machines to a much higher “standard of behavior” than other humans; we forgive other humans for their mistakes, but machines ought to be perfect – or almost so.

In second position: AVs “should ensure compliance with road traffic regulations.” This is striking in its simplicity, and I suppose that the whole discussion on how the law and its enforcement are actually rather flexible (such as the kind of discussion this very journal hosted last year in Ann Arbor) has not reached Geneva yet. As can be seen in the report on that conference, one cannot just ask AVs to “comply” with the law; there is much more to it.

In third position: AVs “should allow interaction with the other road users (e.g. by means of external human machine interface on operational status of the vehicle, etc.)” Hold on! This was a topic at last year’s Problem-Solving Initiative hosted by the University of Michigan Law School, and we concluded that it was actually a bad idea. Why? First, people need to understand whatever “message” is sent by such an interface, and language may get in the way. Second, the word “interaction” suggests some form of control by the other road user. Think of a hand signal to get the right of way from an AV; living in a college town, it is not difficult to imagine how such “responsive” AVs could wreak havoc in areas with plenty of “other road users,” on their feet or zipping around on scooters. Our conclusion was that the AV could send simple light signals to indicate that its systems have “noticed” a crossing pedestrian, for example, without any additional control mechanism being given to the pedestrian. Obviously, jaywalking in front of an AV would still result in the AV braking… and maybe sending angry light signals or honking, just like a human driver would.

Finally: cybersecurity and system updates. Oof! The cybersecurity issues of IoT devices are an evergreen source of memes and mockery, windows to a quirky dystopian future where software updates (or the lack thereof) would prevent one from turning the lights on, flushing the toilet, or getting out of the house… or where a botnet of connected wine bottles sends DDoS attacks across the web’s vast expanse. What about a software update while merging onto a crowded highway from an entry ramp? In that regard, the language of those sections seems rather meek, simply citing the need to respect “established” cybersecurity “best practices” and to ensure system updates “in a safe and secured way…” I don’t know what cybersecurity best practices are, but looking at the constant stream of IT industry leaders caught in various cybersecurity scandals, I have some doubts. If there is one area where actual standards are badly needed, it is consumer-facing connected objects.

All in all, is this just yet another useless piece of paper produced by an equally useless international organization? If one is looking for raw power, probably. But there is more to it: the interest of such a document is that it reflects the lowest common denominator among countries with diverging interests. The fact that they agree on something (or maybe on nothing) can be a vital piece of information. If I were an OEM or a policy maker, I would certainly monitor it with due care.

“Safety.” A single word that goes hand-in-hand (and rhymes!) with CAV. If much has been said and written about CAV safety already (including on this very blog, here and there), two things are certain: while human drivers seem relatively safe – when considering the number of fatalities per mile driven – there are still too many accidents, and increasingly more of them.

The traditional approach to safely deploying CAVs has been to make them drive: so many miles, with so few accidents and “disengagements,” that the regulator (and the public) would consider them safe enough. Or even safer than us!

Is that the right way? One can question where CAVs are being driven. If all animals were once equal, not every mile is driven equally. All drivers know that a mile on a straight, well-maintained road on a fine sunny day is not the same as a mile driven on the proverbially mediocre Michigan roads during a bout of freezing rain. The economics are clear: the investments in AV technology will only turn a profit through mass deployment. Running a few demos and prototypes in Las Vegas won’t cut it; CAVs need to be ready to tackle the diversity of weather patterns found throughout the world beyond the confines of the US Southwest.

Beyond the location, there is the additional question of whether such a “testing” method is the right one in the first place. Many are challenging what appears to be the dominant approach, most recently during this summer’s Automated Vehicle Symposium. Their suggestion: proper comparison and concrete test scenarios. For example, rather than simply aiming for the fewest accidents per thousands of miles driven, one can measure braking performance at 35 mph, in low-visibility and wet conditions, when a pedestrian appears 10 yards in front of the vehicle. In such a scenario, human drivers can meaningfully be compared to software ones. Furthermore, on that basis, all industry players could come together to develop a safety checklist that any CAV must pass before hitting the road.

Developing a coherent (and standardized?) approach to safety testing should be at the top of the agenda, with a looming push in Congress to get the AV bill rolling. While there are indications that the industry might not be expecting much from the federal government, this bill still has the possibility of allowing CAVs on the road without standardized safety tests, which could have dire consequences for the industry and its risk-seeking members. Not to mention that a high-risk business environment squeezes out players with shallower pockets (and possibly innovation) and puts all road users, especially those without the benefit of a metal rig around them, at physical and financial risk were an accident to materialize. Signs of moderation, such as Cruise postponing the launch of its flagship product, allow one to be cautiously hopeful that the “go fast and break things” mentality will not take hold in the automated driving industry.

*Correction 9/9/19 – A correction was made regarding the membership to 1958 Agreement and participation at the World Forum.

A European Commission plan to implement the connected-car-specific 802.11p “Wi-Fi” standard for vehicle-to-vehicle (V2V) communication was scrapped in early July after a committee of the Council of the European Union (which formally represents individual member states during the legislative process) rejected it. The standard, also known as ITS-G5 in the EU, operates in the same frequency range as domestic Wi-Fi, now most often deployed under the 802.11n specification.

The reasons for this rejection were made clear by the opponents of “Wi-Fi V2V”: telecommunication operators and consortia of IT equipment and car manufacturers (such as BMW and Qualcomm) would never allow locking out 5G and its ultra-low-latency, “vehicle-to-everything” (V2X) solutions. In turn, countries with substantial industrial interests in those sectors (Germany and Finland, to name only two) opposed the Commission plan.

Yet it appears that Commissioner Bulc had convincing arguments in favor of 802.11p. In her letter to the European Parliament’s members, she stresses that the technology is available now, and can be successfully and quickly implemented, for immediate improvements in road safety. In her view, failure to standardize now means that widespread V2V communication will not happen until the “5G solutions” come around.

5G is a polarizing issue, and information about it is often tainted by various industries’ talking points. It matters first to distinguish 5G as the follow-up to 4G from 5G as the whole-new-thing-everyone-keeps-talking-about. As the follow-up to 4G, 5G is the technology that underpins data delivery to individual cellphones. It operates mostly at higher frequencies than current 4G, frequencies which have a lower range and thus require more antennas. That in turn explains why most current cellphone 5G deployments are concentrated in large cities.

The “other” 5G is based on a promise: the higher the frequency, the higher the bandwidth and the lower the latency. Going into the hundreds of GHz, 5G theoretically delivers large bandwidth (in the range of 10 Gbps) in less than 1ms, with the major downside of a proportionally reduced range and ability to penetrate dense materials.

The logical conclusion of these technical limitations is that the high-bandwidth, low-latency 5G – the one set to revolutionize the “smart”-everything and that has managed to gather some excitement – will become a reality the day our cities are literally covered with antennas at every street corner, on every lamppost and stop sign. Feasible over decades in cities (with whose money, though?), a V2X world based on a dense mesh of antennas looks wholly unrealistic in lower-density areas.

Why does it make sense, then, to kick out a simple, cheap and patent-free solution to V2V communication in favor of a costly and hypothetical V2X?

Follow the money, as one would say: what is key in this debate is understanding the basic economics of 5G. As deployment goes on, it is those who hold the “Standard Essential Patents” (SEPs) who stand to profit the most. As reported by Nikkei in May 2019, China leads the march with more than a third of SEPs, followed by South Korea, the US, Finland, Sweden, and Japan.

If the seat of the V2V standard is already taken by Wi-Fi, that is one less market in which to recoup the costs of 5G development. It thus comes as no surprise that Finland was one of the most vocal opponents of the adoption of 802.11p, despite having no car industry – its telecom and IT sectors have invested heavily in 5G and are visibly poised to reap the rewards.

Reasonable engineers may disagree on the merits of 802.11p – as the United States’ own experience with DSRC, based on that same standard, shows. Yet, the V2X 5G solutions are nowhere to be seen now, and investing in such solutions was and remains to this day a risky enterprise. Investments required are huge, and one can predict there will be some public money involved at some point to deploy all that infrastructure.

“The automotive industry is now free to choose the best technology to protect road users and drivers,” said Lise Fuhr, director general of the European Telecommunications Network Operators’ Association (ETNO), after their win at the EU Council. I would rather say: free to choose the technology that will preserve telcos’ and some automakers’ risky business model. In the meantime, European citizens and taxpayers subsidize that “freedom” with more car accidents and fatalities, not to speak of the other monetary costs 5G brings about. The seat will have been kept warm until the day their 5G arrives – if it does – at some point between 2020 and 2025. Until then, users will have to satisfy themselves with collision radars, parking cameras, cruise control, and our good ol’ human senses.

A couple of weeks ago, I wrote a post outlining the fledgling legal efforts to address the increasingly urgent privacy concerns related to automated vehicles. While Europe’s General Data Protection Regulation and California’s Consumer Privacy Act set a few standards to limit data sharing, the US as a whole has yet to seriously step into the field of data privacy. In the absence of national regulation in the United States, this post will look at an industry-created standard. The auto industry standard is important not only for its present-day impact on how auto companies use our personal information, but also for the role it is likely to play in influencing any eventual Congressional legislation on the subject.

In 2014, two major industry trade associations – the Alliance of Automobile Manufacturers and the Association of Global Automakers – collaborated to create a set of guiding principles for the collection and management of consumer data. These twenty automakers, including the “Big Three” in the US and virtually every major auto company around the globe, created a list of seven privacy protection principles to abide by in the coming years.

In the list, two of the principles are somewhat well fleshed out: transparency and choice. On transparency, the automakers have pledged to provide “clear, meaningful information” about things like the types of information collected, why that information is collected, and who it is shared with. For certain types of information, primarily the collection of geolocation, biometric, or driver behavior information, the principles go one step further, requiring “clear, meaningful, and prominent notices.”  When it comes to choice, the industry says that simply choosing to use a vehicle constitutes consent for most types of data collection. Affirmative consent is sometimes required when geolocation, biometric or driver behavior data is shared, but that requirement contains several important exceptions that allow the automaker to share such data with their corporate partners.

The remaining five – respect for context; data minimization, de-identification, and retention; data security; integrity and access; and accountability – may serve as important benchmarks going forward. For now, each of these five points contains no more than a handful of sentences pledging things like “reasonable measures.”

These industry-developed privacy protection principles are, for the most part, still pretty vague. The document describing all seven of them in-depth runs a mere 12 pages. In order for the standards to be truly meaningful, much more needs to be known about what constitutes reasonable measures, and in what sorts of situations geolocation, biometric, or driver behavior data can be shared. Furthermore, consumers should know whether the automaker’s corporate partners are bound by the same limits on data sharing to which the manufacturers have held themselves.

Without more detail, it is unclear whether these principles afford consumers any more protections than they would have otherwise had. They are important nonetheless for two reasons. They show that the industry at least recognizes some potential problems with unclear data-sharing rules, and they will likely play a key role in the development of any future legislation or federal regulation on the topic.

The European Parliament, the deliberative institution of the European Union, which also acts as a legislator in certain circumstances, approved the European Commission’s proposal for a new Regulation on motor vehicle safety on February 20, 2019. The proposal is now set to move to the next step of the EU legislative process; once enacted, an EU Regulation is directly applicable in the law of the 28 (soon to be 27) member states.

This regulation is noteworthy as it means to pave the way for Level 3 and Level 4 vehicles by obligating car makers to integrate certain “advanced safety features” in their new cars, such as driver attention warnings, emergency braking, and a lane-departure warning system. If many of us are familiar with such features, which are already found in many recent cars, one may wonder how this would facilitate the deployment of Level 3 or even Level 4 cars. The intention of the European legislator is not outright obvious, but a more careful reading of the legislative proposal reveals that the aim goes well beyond the safety features themselves: “mandating advanced safety features for vehicles . . .  will help the drivers to gradually get accustomed to the new features and will enhance public trust and acceptance in the transition toward autonomous driving.” Looking further at the proposal reveals that another concern is the changing mobility landscape in general, with “more cyclists and pedestrians [and] an aging society.” Against this backdrop, there is a perceived need for legislation, as road safety metrics have at best stalled, and are even on the decline in certain parts of Europe.

In addition, Advanced Emergency Braking (AEB) systems have been trending at the transnational level in these early months of 2019. The World Forum for Harmonization of Vehicle Regulations (known as WP.29) has recently put forward a draft resolution on such systems, with a view to standardizing them and making them mandatory for the WP.29 members, which include most Eurasian countries, along with a handful of Asia-Pacific and African countries. While the World Forum is hosted by the United Nations Economic Commission for Europe (UNECE), a regional commission of the Economic and Social Council (ECOSOC) of the UN, it notably does not include among its members certain UNECE member states, such as the United States or Canada, which have so far declined to partake in the World Forum. To be sure, the North American absence (along with that of China and India, for example) is not new; they have never partaken in the World Forum’s work since it began operations in 1958. If the small yellow front corner lights one sees on US cars are not something you will ever see on any car circulating on the roads of a WP.29 member state, one may wonder whether the level of complexity involved in designing CAV systems will not inevitably push OEMs toward harmonization; it is one thing to live with having to manufacture different types of traffic lights, and it is another to design and manufacture different CAV systems for different parts of the world.

Yet it is well known that certain North American regulators are not big fans of such an approach. In 2016, the US DoT proudly announced an industry commitment by almost all car makers to implement AEB systems in their cars, with the only requirement that such systems satisfy set safety objectives. If it seems like everyone would agree that limited aims are sometimes the best way to get closer to the ultimate, bigger goal, regulating styles vary. In the end, one must face the fact that by 2020, AEB systems will be harmonized for a substantial part of the global car market, and maybe even, de facto, in North America. And given that the World Forum has received a clear mandate from the EU – renewed as recently as May 2018 – to develop a global and comprehensive CAV standard, the North American and Asian governments that have so far declined to join WP.29 might only lose an opportunity to influence the outcome of such CAV standards by sticking to their guns.

Recently, I wrote about the prospects for federal legislation addressing connected and autonomous vehicles. While the subject will be taken up in the new Congress, the failed push for a bill at the end of 2018 is an indication of the steep hill any CAV legislation will have to climb. Despite the lack of federal legislation, the Department of Transportation (DOT) has been active in this space. In October 2018, the Department issued Preparing for the Future of Transportation: Automated Vehicles 3.0, DOT’s most comprehensive guide to date outlining its plan for the roll-out of CAVs. The document indicates that the department expects to prioritize working with industry to create a set of voluntary safety standards over the development of mandatory regulations.

Given the Trump administration’s broad emphasis on deregulation as a driver of economic growth, this emphasis on voluntary standards is unsurprising. A handful of consumer groups focused on auto safety have raised the alarm over this strategy, arguing that mandatory regulations are the only way to both ensure safety and make the general public confident in automated driving technology.

The remainder of this post will discuss the effectiveness of voluntary safety standards relative to mandatory regulation for the CAV industry, and consider the prospects of each going forward. While little information is available about the response to either option in the CAV field, I will seek to draw lessons from experience with regulation of the traditional automobile industry.

The National Highway Traffic Safety Administration (NHTSA) has undergone a dramatic strategic shift over its half-century existence. In its early days, NHTSA was primarily devoted to promulgating technology-forcing regulations that sought to drive innovation across the industry. Jerry Mashaw and David Harfst have documented the agency’s shift away from adopting regulations in favor of an aggressive recall policy for defective products in the 1980s. As they write, the agency then returned to a regulatory policy in the 21st century. However, rather than attempting to force technology, it chose to mandate technologies that were already in use across most of the auto industry. While these new standards still took the form of mandatory regulation, they operated as virtually voluntary standards because they mandated technologies the industry had largely already adopted on its own. Mashaw and Harfst found that this shift was essentially a trade-off: slower adoption of new safety technology, and potentially lost lives, in exchange for greater legitimacy in the eyes of the courts and industry. Particularly given the rise of pre-enforcement judicial review of regulations, this shift may be seen as a defensive mechanism to allow more regulations to survive court challenges.

Even as NHTSA has pulled back from technology-forcing regulations, there has been no sustained public push for more aggressive auto safety regulation. This may be because the number of traffic fatalities has fallen slightly in recent decades, a decline likely due more to a reduction in drunk driving than to improved technology. With studies showing that the public is particularly wary of CAV adoption, it remains to be seen whether NHTSA will seek to return to its technology-forcing origins. While the auto industry has traditionally preferred voluntary adoption of new technologies, it may be the case that government mandates would help ease public concern about CAV safety, speed the adoption of this new technology, and ultimately save lives.

To date, twenty-nine states have enacted legislation related to connected and autonomous vehicles (CAVs). Eleven governors have issued executive orders designed to set guidelines for and promote the adoption of CAVs. In response to this patchwork of state laws, some experts have argued that the federal government should step in and create a uniform set of safety regulations.

Partially responding to such concerns, the National Highway Traffic Safety Administration (NHTSA) issued A Vision for Safety 2.0 in September 2017. The document contains voluntary guidance for the automotive industry, suggesting best practices for the testing and deployment of CAVs. It also contains a set of safety-related practices for states to consider implementing in legislation.

The NHTSA document is likely to have some effect on the development of safety practices for the testing and deployment of automated vehicles. While not mandatory, the guidance does give the industry some indication of what the federal government is thinking. Some companies may take this document as a sign of what they will be required to do if and when the Congress passes CAV legislation, and begin to prepare for compliance now. Furthermore, this nudge from the federal government could influence state action, as legislators with limited expertise on the subject look to NHTSA for guidance in drafting their CAV bills.

Without new legislation, however, the force of NHTSA’s guidance will be blunted. No manufacturer is required to follow the agency’s views, and state legislatures are free to continue passing conflicting laws. Such conflicts among states could make it difficult to design a vehicle that is able to meet all state standards and travel freely throughout the country. To date, this has not been an acute problem because CAVs, where they are deployed, operate only within a tightly limited range. As use of these vehicles expands, however, uniform standards will begin to appear more necessary.

A late push for CAV legislation in the last Congress petered out in the December lame-duck session. After unanimously passing the House in 2017, the bill stalled when Senate Democrats balked at what they saw as its lack of sufficient safety protections. With Congress’ schedule blocked by the government shutdown, CAV legislation has been put on the back burner so far in 2019. At some point, though, Congress is likely to take up a new bill. The Senators who were key drivers of the CAV bill in the past Congress, Gary Peters (D-MI) and John Thune (R-SD), remain in the Senate. Both Senators retain their influential positions on the Committee on Commerce, Science, and Transportation. The key change from the previous Congress will be the dynamic in the newly Democratic-controlled House. While a bill passed unanimously last term, it remains to be seen whether the new House will be held back by the same consumer safety concerns that led the Senate to reject the bill last term.

As autonomous vehicle technology continues to march forward, calls for a uniform nationwide regulatory system are expected to grow. We will be following major developments.

Transportation as we know it is changing dramatically. New technology, new business models, and new ways of thinking about how we move are being announced almost daily. With all this change come inevitable questions about legality, responsibility, and morality. Lawyers and policy makers play a leading role in answering these challenging questions. The newly launched Journal of Law and Mobility will serve an important role as the leading source for scholarship, commentary, analysis, and information, and enable a meaningful dialogue on a range of mobility topics.

In order to facilitate this needed dialogue, it is important at the outset that we ground ourselves in the terminology used to describe “mobility.”  There are a lot of terms being used by different people in the industry, government and media that can be confusing or ambiguous to those not familiar with the technology.  Terms such as “semi-autonomous,” “highly automated” or “connected and automated vehicles” can describe a wide range of vehicles, from “self-driving cars” that actually have self-driving capability, to cars that are connected and communicating with each other, but have lower levels of automation that provide assistance to drivers.

It is very important that we be clear and precise when discussing mobility, because while the various technologies raise common issues, each has many unique aspects that merit separate discussion. Fortunately, we have a framework that enables clearer discussion of automated technology: the SAE levels of driving automation. This document describes six levels of automation, from Level 0 (no automation) to Level 5 (full automation), and the responsibilities associated with each level in terms of monitoring and executing the dynamic driving task (DDT). The SAE taxonomy has become so widespread that even governmental entities such as the National Highway Traffic Safety Administration (NHTSA) and the California Department of Motor Vehicles (CA DMV) use these levels of automation in their policy statements and rulemaking.

The CA DMV went even further and specifically regulates the use of certain terminology. In its driverless testing regulations issued in February 2018, the DMV requires that “no manufacturer or its agents shall represent in any advertising for the sale or lease of a vehicle that a vehicle is autonomous” unless the vehicle meets the definition of SAE Levels 3-5.

Lawyers know the importance of words for legal purposes, but terminology also matters for consumers, particularly for building the trust that will be required for the successful deployment of self-driving vehicles. There is already some data suggesting that consumers are confused. For example, when an MIT AgeLab survey asked respondents whether self-driving vehicles are available for purchase today, nearly 23% said “yes,” even though no Level 3 or higher vehicle is actually for sale yet.

NHTSA’s 2017 policy statement addresses this concern: it includes “Consumer Education and Training” as one of the twelve safety design elements of the Voluntary Safety Self-Assessments it suggests that manufacturers complete, citing a need for explicit information on system capabilities to minimize potential risks from user abuse or misunderstanding of the system. Legislation that passed the House last year, the SELF DRIVE Act, would take this a step further by directing the Department of Transportation (DOT) to research the most effective methods and terminology for informing consumers about vehicle automation capabilities and limitations, including the possible use of the SAE levels.

SAE is not the only organization to tackle this problem; similar definitions have been developed in Europe by the German Association of the Automotive Industry (VDA) and the German Federal Highway Research Institute (BASt). Whether or not we use one of these definitional frameworks, what matters most is that we are specific about what we are discussing, enabling clear and effective dialogue as we work to solve the important issues ahead.