January 2022

Are you familiar with SAE J3016, the recommended practice that defines, among many other terms, the widely (mis)cited levels of driving automation? You can be! You could read one of the many purported (and often erroneous) summaries of it. You could read my short gloss on it. Or you could read the actual current document by downloading it for free from SAE’s website. Go ahead. I’ll wait.

Let me know when you reach the part about the minimal risk condition (MRC). It’s on pages 15 and 38—but don’t skip ahead. After all, you don’t want to miss the important parts about failure mitigation strategy and fallback on page 10. Knowing that “ODD” refers to “operational design domain” will help shortly. Also, all those “deprecated terms” are fun. Okay, ready?

So, as you have now read, MRC is basically where a vehicle stops (and what the vehicle does while stopped) “when a given trip cannot or should not be continued.” At levels 4 and 5, the automated driving system (ADS) rather than the human driver is expected to achieve an MRC.

Years ago, I made two proposals for more clearly defining MRC. The first proposal was to develop a hierarchy of conditions that, depending on the circumstances, might qualify as “minimal risk”: stopping in place, in an active lane of traffic, on a narrow shoulder, on a wide shoulder, in a parking lot, in front of a hospital emergency department, and so forth. Happily, there is some progress on this.
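
To make the hierarchy idea concrete, here is a minimal sketch in Python. The specific conditions and their ordering are illustrative assumptions drawn from the list above; J3016 specifies no such ranking, and in practice the relative risk of these states would depend on circumstances.

```python
from enum import IntEnum

class MrcCondition(IntEnum):
    """A hypothetical hierarchy of stopped states, ordered from most
    risky (lowest value) to least risky (highest value)."""
    STOPPED_IN_PLACE = 0           # wherever the vehicle happens to be
    STOPPED_IN_ACTIVE_LANE = 1     # in an active lane of traffic
    STOPPED_ON_NARROW_SHOULDER = 2
    STOPPED_ON_WIDE_SHOULDER = 3   # outside any active travel lane
    STOPPED_IN_PARKING_LOT = 4
    STOPPED_AT_EMERGENCY_DEPT = 5  # e.g., a hospital emergency department
```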

My second proposal involved rethinking the relationship between the MRC and the ADS. The current version of J3016 explains that the “characteristics of automated achievement of a minimal risk condition at Levels 4 and 5 will vary according to the type and extent of the system failure, the ODD (if any) for the ADS feature in question, and the particular operating conditions when the system failure or ODD exit occurs.”

My proposal: J3016 should instead define both “attainable MRC” and “expected MRC,” where:

  • “Minimal risk condition” is a vehicle state that reduces the risk of a crash when a given trip cannot or should not be completed.
  • Attainable MRC (AMRC) is that state that can be achieved given constraints in the ADS, in the rest of the vehicle, and in the driving environment.
  • Expected MRC (EMRC) is that state that should be achieved given constraints in the rest of the vehicle (excluding the ADS) and in the driving environment—but not in the ADS.

Whereas “attainable MRC” corresponds to how “MRC” is currently defined in J3016, “expected MRC” corresponds to how MRC is often used both outside and even inside J3016.

The key difference is that the EMRC is a function of the vehicle and the driving environment—but not a function of the ADS itself. In other words, the EMRC is a stable condition that is reasonable for the instant vehicle in its instant environment regardless of the intended capability or the current functionality of the instant ADS. If a level 4 feature (more on this in a moment) cannot achieve this EMRC, then it has failed. This does not mean that the ADS should just drive the vehicle off a cliff: It should still attempt to reduce risk, but the resulting AMRC is unlikely to be as safe as the EMRC.
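
In code terms (continuing the MrcCondition sketch above; every name here is my own illustrative assumption, not J3016’s), the proposal reduces to a simple asymmetry: the ADS limit enters the AMRC computation but never the EMRC computation. Modeling each source of constraint as a single “best reachable condition” is of course a drastic simplification, but it captures the point.

```python
def emrc(vehicle_limit: MrcCondition,
         environment_limit: MrcCondition) -> MrcCondition:
    """Expected MRC: the best condition permitted by the rest of the
    vehicle and the driving environment. The ADS does not appear here."""
    return min(vehicle_limit, environment_limit)

def amrc(ads_limit: MrcCondition,
         vehicle_limit: MrcCondition,
         environment_limit: MrcCondition) -> MrcCondition:
    """Attainable MRC: the best condition the ADS can actually achieve,
    given all three sources of constraint."""
    return min(ads_limit, vehicle_limit, environment_limit)

def ads_has_failed(ads_limit: MrcCondition,
                   vehicle_limit: MrcCondition,
                   environment_limit: MrcCondition) -> bool:
    """A level 4 feature has failed when its attainable MRC falls short
    of the expected MRC."""
    return (amrc(ads_limit, vehicle_limit, environment_limit)
            < emrc(vehicle_limit, environment_limit))
```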

Let’s get concrete. (AC or PCC? Bad joke. Sorry.) A vehicle that stops on a wide shoulder (outside of an active travel lane) with its emergency flashers engaged has reached an MRC as this term is generally understood. The present difficulty comes not from potential MRCs that are more ambitious (such as returning to a maintenance depot) but from potential MRCs that are less ambitious.

In particular: Can being stopped in an active travel lane constitute a minimal risk condition? This is where distinguishing between AMRC and EMRC can help. Consider three examples, each featuring a level 4 feature:

First, if the vehicle’s driveshaft suddenly breaks in heavy traffic, then it may be physically impossible for the vehicle to reach a shoulder. In this case, stopping in an active travel lane (while engaging the emergency flashers and calling for help) could constitute both the AMRC and the EMRC. This is the best thing that a functional ADS could do under the vehicular and environmental circumstances (the EMRC), and this ADS can actually do it (the AMRC).

Second, if a blizzard or crash suddenly renders a narrow roadway impassable for conventional and automated vehicles alike, then it may again be physically impossible for the vehicle to safely reach a shoulder. In this case as well, stopping in the active travel lane (while engaging the emergency flashers and calling for help) could constitute both the AMRC and the EMRC. Again, this is the best thing that a functional ADS could do under the vehicular and environmental circumstances (the EMRC), and this ADS can actually do it (the AMRC).

The third example is where my proposal departs from J3016. If an ADS loses critical sensors in a deer strike, then it may be unable to detect whether the vehicle has a clear path from its current travel lane to the shoulder. In that case, stopping in the active travel lane (while engaging the emergency flashers and calling for help) could constitute the AMRC, because it might indeed be the best that the ADS can do under the circumstances.

In this case, however, the EMRC would be stopping on the shoulder rather than in the active travel lane. The AMRC and EMRC differ in this case because the inability to reach a shoulder is due to an ADS constraint rather than to a constraint in the rest of the vehicle or in the driving environment. In short, the ADS itself has failed. This is not necessarily a condemnation of the ADS (although in many of the common hypotheticals a lack of sufficient redundancy may indeed be a shortcoming) but, rather, a recognition that the ADS was unable to effectively position an otherwise functional vehicle in the given driving environment.
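
Running the three examples through the sketch above (with hypothetical constraint values) makes the asymmetry visible: only the deer strike yields an AMRC below the EMRC, and hence an ADS failure.

```python
BEST = MrcCondition.STOPPED_AT_EMERGENCY_DEPT  # shorthand: effectively unconstrained
LANE = MrcCondition.STOPPED_IN_ACTIVE_LANE
SHOULDER = MrcCondition.STOPPED_ON_WIDE_SHOULDER

# 1. Broken driveshaft: the vehicle itself cannot reach the shoulder.
ads_has_failed(ads_limit=BEST, vehicle_limit=LANE, environment_limit=BEST)
# -> False (AMRC == EMRC == stopped in the active lane)

# 2. Blizzard: the environment prevents reaching the shoulder.
ads_has_failed(ads_limit=BEST, vehicle_limit=BEST, environment_limit=LANE)
# -> False (AMRC == EMRC == stopped in the active lane)

# 3. Deer strike destroys sensors: only the ADS is constrained.
ads_has_failed(ads_limit=LANE, vehicle_limit=BEST, environment_limit=SHOULDER)
# -> True (the AMRC is the active lane, but the EMRC is the shoulder)
```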

We can analogize these three examples to human driving—at least to a point. A human driver might stop in an active travel lane if a driveshaft breaks or a blizzard makes the road impassable. They might also stop in the active travel lane if they suffer a heart attack or are blinded in a deer strike—but only because humans cannot be designed to perform any differently. In contrast, automated driving systems are being designed to accommodate adverse incidents ranging from software malfunction to hardware loss. Again, this is not necessarily a condemnation of the ADS: Functional safety analysis is based on imperfection rather than perfection. But the inevitability—and even the potential acceptability—of that occasional failure does not negate the failure.

Indeed, J3016 has another term—“failure mitigation strategy”—to describe a vehicle’s response to such a failure. Its definition, however, emphasizes that failure mitigation is exclusively a vehicle function rather than an ADS function. And so here I would distinguish between failure mitigation undertaken by a partially incapacitated ADS (which is within J3016’s scope) and failure mitigation undertaken by a vehicle in the event of a fully incapacitated ADS (which, while beyond J3016’s scope, is certainly within the scope of automated driving design and regulation).

This distinction highlights a big disadvantage to my proposal: Drawing a line between an ADS and the rest of the vehicle is tricky both in theory and in practice. For example, is a brake actuator part of the ADS? Does the answer depend on whether the actuator would also be present in a conventional vehicle? What if the class of vehicle is designed rather than retrofitted for driving automation such that all its systems are closely integrated? As the current definition of failure mitigation strategy illustrates, however, J3016 already draws this line. And, conceptually, this line reinforces the potentially helpful notion of an ADS as analogous to a human driver who drives—or, in the language of J3016, “performs the dynamic driving task” for—the vehicle.

There are, fortunately, some advantages to my proposal.

First, defining EMRC independent of ADS design enables the specification (inside or outside J3016) of a basic floor for this concept: When a trip cannot or should not be completed, then the EMRC entails stopping outside of an active travel lane unless the vehicle (excluding its ADS) or the driving environment prevents this. This is true whether the human driver (at or below level 3) or the ADS (at or above level 4) is expected to achieve this EMRC.

Second, because EMRC is not dependent on ADS design, the EMRC for a poorly designed ADS is not less than the EMRC for a well-designed ADS. If an ADS lacks an adequate backup power supply, then loss of primary power does not affect what the EMRC requires but instead affects whether the ADS can actually meet that expectation.

Third, this floor more clearly distinguishes automated driving levels 3 and 4. An ADS developer that describes an automated driving feature as level 4 promises that the ADS will reliably achieve this EMRC. Again, the failure to meet that expectation does not necessarily warrant condemnation or imply misclassification—but this failure is still a failure. In contrast, an ADS developer that describes a feature as level 3 does not promise that the ADS will reliably achieve this EMRC, which is why human fallback is necessary. If a human does not resume actively driving, a level 3 feature may still try to minimize risk but is not expected to achieve the EMRC.

Without this floor, however, the definition of level 4 becomes tautological. If MRC is merely whatever state the given ADS can achieve under its current circumstances, then the ADS always achieves that MRC. But this is the very definition of level 4, which means that level 3 simply ceases to exist. To put this more concretely: Stopping in an active travel lane is the MRC for an ADS that, by virtue of its own design limitations, necessarily stops only in its current travel lane. An ADS that reliably brings its vehicle to a stop—any stop—when a human does not resume actively driving is therefore level 4. And yet the quintessential example of level 3 has long been a feature that, upon a human driver’s failure to resume actively driving, at least stops the vehicle but does not consistently move it to the shoulder.

Distinguishing EMRC and AMRC fixes this. The EMRC is the same regardless of the level of automation, and the difference is where the expectation falls: At level 3 the human driver is expected to achieve the EMRC, and at level 4 the ADS is expected to achieve the EMRC.
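
A one-line sketch (again continuing the code above, with hypothetical names, and paraphrasing rather than quoting J3016’s level semantics) captures the point: the EMRC computation is level-independent, and only the party expected to achieve it changes.

```python
def expected_to_achieve_emrc(level: int) -> str:
    """Who is expected to achieve the (level-independent) EMRC.
    Simplification: at levels 0-2 the human performs the full driving
    task, so any minimal risk maneuver falls to the human as well."""
    return "ADS" if level >= 4 else "human driver"
```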

In this way, EMRC more correctly aligns with how J3016 itself currently uses the term “minimal risk condition” to distinguish an ADS that may need a human driver to achieve an acceptable level of risk (i.e., level 3) from an ADS that does not (i.e., level 4).

AMRC, in contrast, would correspond to how J3016 currently defines “minimal risk condition.” This is useful in other ways. For example, it is important to know that stopping in an active travel lane may be the AMRC of a level 3 feature in the absence of human intervention or of a level 4 feature in the presence of a catastrophic malfunction. While EMRC is used to define, AMRC can be used to describe.

Yes, this is complicated. And J3016 is criticized for being complicated enough already. But this is a complicated topic. While public-facing summaries can and should be simplified, the underlying technical definitions need the nuance that credibility and utility demand.

(As an aside: I also recognize that J3016 is long and complicated largely because of level 3. But removing level 3 from the document because it is complex and controversial makes no more sense than removing certain words from a dictionary because they too are complex and controversial. Love it or hate it, level 3 represents a design that some automakers are pursuing. We need terms and concepts with which to discuss and debate it.)

Language matters. In a prior revision, the J3016 authors appropriately recognized the difference between individuals (e.g., “dispatchers”) and companies (e.g., “dispatching entities”). And I hope that the next version also recognizes the difference between a feature’s aspirational level of automation and its functional level of automation—a difference with legal significance. But these are topics for future posts and publications.

And so until then: Automated driving systems never die; they just fallback.

This blog is the fourth in a series about facial recognition software in various forms of public and private transportation, as well as the broader public policy concerns raised by facial recognition tools. More posts about the relationship between transportation technology, FRS, and fundamental rights will follow.

As we wrote on October 27th, one of the primary dangers of facial recognition software (FRS) is its impact on the freedom of assembly guaranteed by the First Amendment. FRS is another example of how mobility and surveillance each bear on democratic freedom. Scholars have long emphasized that increased mobility through greater transport options provides an ex ante boost to personal freedom, whereas the electronic or physical traces left by transport can give rise to surveillance and control, imposing an ex post decline in freedom. Thus, while greater mobility can be a boon to democratic liberty, the greater surveillance of movement that it makes possible can counteract that gain by suppressing the freedom of assembly and association. It is therefore crucial to the future of democracy and our constitutional rights that leaders in the mobility space remain aware of potential chilling effects on protected First Amendment activity.

At its zenith, FRS can fuel a surveillance state in which the government can locate and identify its citizens and use those tools to shape every aspect of public life. For years, the Chinese government has used this technology to track and surveil nearly all of its 1.4 billion citizens. If that is not scary enough, the regime has utilized the technology in its genocidal campaign to control, incarcerate, and destroy the Uighurs.

Of course, it’s difficult to imagine this type of situation in the US. However, surveillance during transportation and in public places is already a fact of life for many Americans; FRS is already a tool used by the Detroit, Chicago, and Pittsburgh police departments.

A stone’s throw from us at the Law & Mobility Program, Project Green Light Detroit (PGL) is a public-private program by which local businesses set up cameras whose video feeds are viewable in real time by the Detroit Police Department (DPD). The goal is to improve neighborhood safety by speeding up police response times to at-risk locations (inside and outside liquor stores, gas stations, restaurants, medical clinics, and houses of worship). Some of these video cameras may also be connected to a face surveillance system, enabling them to record not only what is happening at a given location but also who is at that location at any given moment. Touting the program’s effectiveness, the DPD reports that violent crime has been reduced 23% year-to-date at all sites and 48% at the original 8 sites compared to 2015. However, some commentators cast doubt on the program’s promise, citing research that mass surveillance programs like this generally have only a mixed impact as they scale up.

Moreover, the existence of this capacity to monitor people as they travel through their days may also chill legitimate and valuable speech, running afoul of the First Amendment. Surveillance enables the state to see whom citizens associate with and the speech they make or even plan to make. The Supreme Court has found that requirements to disclose such speech and assembly may create an unnecessary risk of discouraging speech, and thus are often unconstitutionally vague and overbroad. Ams. for Prosperity Found. v. Bonta, 141 S. Ct. 2373, 2388 (2021). State actions that can indirectly chill speech trigger exacting scrutiny, which requires that the policy be narrowly tailored to a sufficiently important governmental interest, although the policy need not be the least restrictive means of promoting that interest. Id. at 2383-84. In Bonta, the Supreme Court held that California’s requirement that charitable organizations disclose their Schedule Bs, adopted in order to investigate charitable misconduct, was facially unconstitutional. Id. at 2378. The Court found that the law was essentially “a dragnet for sensitive donor information from tens of thousands of charities each year, even though that information will become relevant in only a small number of cases,” that there were alternative ways to investigate fraud, and that the program thus mostly just made fraud investigations easier and more convenient by keeping the Schedule B information close at hand. Id. at 2387.

In a constitutional challenge, PGL may run into problems similar to those of California’s disclosure requirement. The program can be characterized as indiscriminate surveillance not narrowly tailored to the interest of crime prevention, since there are many alternative ways to report or document incidents of crime without real-time video feeds. The interest in faster police response times may even be characterized as administrative in nature, and that argument may grow stronger if the aforementioned concerns about the program’s effectiveness at scale come true. Such a constitutional challenge is becoming more likely as FRS receives more scrutiny from commentators and activists for its impact on civil rights. For instance, a Michigan man recently filed a federal lawsuit against the DPD for wrongfully arresting and jailing him based on flawed and racially biased FRS, and civil rights organizations have petitioned the government to stop using FRS. PGL has already been put to use against disfavored speech; in summer 2020, the media reported allegations that PGL was used on crowds of Black Lives Matter protestors to identify people not social distancing and fine them for violating the governor’s health order, and to identify those with questionable immigration status.

Besides the legal issues, FRS programs like PGL may face resistance from the communities they are supposed to serve. FRS does not just identify criminals; it identifies all people. Thus, PGL not only allows the DPD to identify a mugger outside a liquor store; it also allows the government to watch people in deeply personal moments where they expect privacy, such as praying at a house of worship, obtaining an abortion at a medical clinic, or receiving counseling at a drug rehabilitation center. Protests have been mounted against PGL, and surveys of 130 residents of three cities (Detroit, Los Angeles, and Charlotte) indicated that residents want to be seen but not watched and are uncomfortable with FRS because of its intrusion on their privacy. Installing FRS in our transportation networks may therefore help make our daily lives safer, but it also confers great power that may not only run afoul of constitutional freedoms but also unsettle people’s notions of privacy.