The California DMV recently released several 2019 reports from companies piloting self-driving vehicles in California. Under state law, all companies actively testing autonomous vehicles on California public roads must disclose the number of miles driven and how often human drivers were required to retake control from the autonomous vehicle. Retaking control is known as “disengagement.” The DMV defines disengagements as:

“[D]eactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”

Because of the proprietary nature of autonomous vehicle testing, data is rarely made public; this is one of the few areas where progress data is publicly available. The 60 companies actively testing in California cumulatively traveled 2.88 million miles in 2019. The table below reports the various figures for some of the major testers in California.

| Company | Vehicles Active in CA | Miles Driven in 2019 | Disengagements | Disengagements per 1,000 Miles | Average Miles Between Disengagements |
|---|---|---|---|---|---|
| Waymo | 153 | 1.45 million | 110 | 0.076 | 13,219 |
| GM Cruise | 233 | 831,040 | 68 | 0.082 | 12,221 |
| Apple | 66 | 7,544 | 64 | 8.48 | 118 |
| Lyft | 20 | 42,930 | 1,667 | 38.83 | 26 |
| Aurora | ? | 13,429 | 142 | 10.57 | 95 |
| Nuro | 33 | 68,762 | 34 | 0.494 | 2,024 |
| Pony.ai | 22 | 174,845 | 27 | 0.154 | 6,493 |
| Baidu | 4 | 108,300 | 6 | 0.055 | 18,181 |
| Tesla | 0 | 0 | 0 | 0 | 0 |
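The two rate columns are simple arithmetic on the raw counts, so readers can sanity-check the reported figures themselves. Here is a minimal Python sketch; small discrepancies from the published rates come from rounding in the underlying reports.

```python
# Recompute the DMV disengagement-report rate metrics from raw figures.
# The sample values are GM Cruise's reported 2019 numbers from the table above.

def disengagement_metrics(miles, disengagements):
    """Return (disengagements per 1,000 miles, average miles between disengagements)."""
    per_1000 = disengagements / miles * 1000
    miles_between = miles / disengagements
    return per_1000, miles_between

per_1000, miles_between = disengagement_metrics(831_040, 68)  # GM Cruise
print(f"{per_1000:.3f} per 1,000 miles; {miles_between:,.0f} miles between disengagements")
# prints: 0.082 per 1,000 miles; 12,221 miles between disengagements
```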

What these numbers make clear is that several contenders have made significant progress in the autonomous vehicle space, while others are not yet so competitive. Companies like Waymo, GM Cruise, and Baidu (which also tests extensively in China) have made incredible progress in decreasing the frequency at which a driver must take over from an automated vehicle. Others, like Apple, Lyft, and Aurora, while making progress, are nowhere near as sophisticated at avoiding disengagements yet. Noticeably, Tesla, the manufacturer frequently in the news for its “Autopilot” feature, does not test on public roads in California. The company says it conducts tests via simulation, on private test tracks and public roads around the world, and “shadow-tests” by collecting anonymized data from its customers during normal driving operations.

What these numbers seem to illustrate is that the autonomous vehicle industry is not all on par, as many often believe. It is often said that Henry Ford did not conceive the idea of an automobile; he perfected it. Similarly, companies like Waymo or GM may be the first to perfect autonomous vehicles, and they will gain an incredible market advantage once they do so. They are striving to be the Fords of this space, while others look like they are still manufacturing carriages. However, despite these impressive numbers from a select few, the companies themselves think these metrics “do[] not provide relevant insights” (per Waymo) and that the idea that they give any “meaningful insight . . . is a myth” (per GM Cruise).

Why are the head-and-shoulders leaders on these metrics saying that they provide very little indication of progress on the technology? Disengagement reports may not be the best way for these companies to build trust and credibility in their products. They are transparent only in that they provide some data, with no detail or context.

I was having a conversation about these disengagement numbers with a colleague* this week, and the topic of driver distraction arose. In the California tests, the driver is constantly alert. Once these vehicles are in the hands of the general public, a notification to take over may not be effective if the driver is distracted. One reason these numbers do not provide particularly useful information is that for the metrics to be meaningful, at least two things must be true:

  1. If the vehicle does not indicate it needs to disengage, no technical errors have been made; and
  2. The driver is paying attention and can quickly take over when necessary.

In California testing, the drivers behind the wheel are always alert and ready to take over. They may take over when the vehicle indicates they must, because of a malfunction or poor conditions. The driver can also take over when the vehicle has done something incorrectly yet does not indicate that the driver needs to intervene. This could include veering into another lane or failing to recognize a pedestrian.

One of the allures of autonomous vehicles is that a driver may not need to be 100 percent engaged for the vehicle to function correctly. However, current technology has not yet achieved this result, as reiterated this past week by the National Transportation Safety Board (NTSB). The NTSB is an independent federal agency; it lacks enforcement power, but its recommendations are considered thorough and are taken seriously by policymakers.

The NTSB put forward many findings on Tuesday, February 25th regarding a Tesla crash that killed a California driver in March 2018. (A synopsis of the NTSB report and findings can be found here.) The crash involved the driver of a Tesla in Autopilot mode, which struck a barrier between the highway and a left exit lane. The NTSB found that the Tesla briefly lost sight of the lines marking the highway lane (because of fading on the highway lines) and started to follow the right-most lane marker of the exit lane, causing the vehicle to enter the “gore area.” This same action had apparently occurred several times in this exact vehicle, but on previous trips the driver was paying attention and was able to correct the vehicle. This time, the driver was playing a mobile game and did not correct the vehicle, causing the crash. Here is how the NTSB presented three of its findings:

The Tesla’s Autopilot lane-keeping assist system steered the sport utility vehicle to the left into the neutral area of the gore, without providing an alert to the driver, due to limitations of the Tesla Autopilot vision system’s processing software to accurately maintain the appropriate lane of travel. (emphasis added)

The driver did not take corrective action when the Tesla’s Autopilot lane-keeping assist system steered the vehicle into the gore area, nor did he take evasive action to avoid the collision with the crash attenuator, most likely due to distraction by a cell phone game application. (emphasis added)

The Tesla Autopilot system did not provide an effective means of monitoring the driver’s level of engagement with the driving task.

Here we see a combined failure of both (1) and (2) presented above, along with an inability to adequately monitor driver engagement. The vehicle took an action it assumed to be correct, and thus did not notify the driver to take over. This, combined with the driver not paying attention and failing to notice the need to intervene, resulted in the crash. This tragic accident highlights that the AV industry still has many areas to improve before higher SAE level vehicles are ready for mass adoption. (The ADAS on the Tesla was SAE Level 2.)

As I discussed last week, the federal Department of Transportation has taken a rather hands-off approach to the regulation of automated vehicles, preferring to issue guidance rather than mandatory regulations. The NTSB criticized this approach in its Tesla crash findings, writing that there has been “Insufficient Federal Oversight of Partial Driving Automation Systems”:

The US Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) have taken a nonregulatory approach to automated vehicle safety. NHTSA plans to address the safety of partial driving automation systems through enforcement and a surveillance program that identifies safety-related defect trends in design or performance. This strategy must address the risk of foreseeable misuse of automation and include a forward-looking risk analysis.

Because the NTSB lacks enforcement power, it cannot compel industry actors or other government agencies to take any action. It can only perform investigations and make recommendations. NTSB Chairman Robert Sumwalt had much to say regarding distracted driving, the AV industry, and the lack of government regulations in the hearing on Tuesday, February 25th.

“In this crash we saw an over-reliance on technology, we saw distraction, we saw a lack of policy prohibiting cell phone use while driving, and we saw infrastructure failures, which, when combined, led to this tragic loss.”

“Industry keeps implementing technology in such a way that people can get injured or killed . . . [I]f you own a car with partial automation, you do not own a self-driving car. Don’t pretend that you do.”

“This kind of points out two things to me. These semi-autonomous vehicles can lead drivers to be complacent, highly complacent, about their systems. And it also points out that smartphones manipulating them can be so addictive that people aren’t going to put them down.”

Chairman Sumwalt is right to be frustrated. The DOT and NHTSA have not regulated the AV industry, or ADAS, as they should. Tragic accidents like this can be avoided through a variety of solutions: better monitoring of driver engagement than torque-sensing steering wheels, lock-out functions for cell phones while driving, and stricter regulation of advertising and warnings by companies offering ADAS. Progress is being made in the AV industry, and automated vehicles are getting smarter and safer every day. But incidents like this, which combine a failure of technology, regulation, and consumer use, do not instill public confidence in this incredible technology that will be beneficial to society. They only highlight how much farther we still have to go.

*I would like to thank Fiona Mulroe for the inspiration to take this approach to the disengagement report.

Cars are getting smarter and safer. And yet this new breed of automobile remains inaccessible to large parts of the consumer base due to high costs. Some of these costs are a natural result of technological advancements in the automobile industry. Others, however, may be a product of inefficient market dynamics among car manufacturers, insurers, and technology companies – which ultimately contribute to a reduced state of safety on our roads.

Advanced Driver Assistance Systems (ADAS) that equip cars with features like autonomous emergency braking, parking assistance, and blind spot detection are growing at an exponential rate. The global ADAS market size was estimated at around $14.15 billion in 2016. Since then, it has witnessed a high rate of growth and is expected to reach $67 billion by 2025. Not only is this good news for ADAS developers, it can also significantly increase road safety. The Insurance Institute for Highway Safety estimates that the deployment of automatic emergency braking in most cars on the road, for instance, could prevent 28,000 crashes and 12,000 injuries by 2025.
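For context, the annual growth rate implied by those two market-size figures is easy to back out. A quick sketch, assuming 2016 and 2025 as the endpoints (nine years of growth):

```python
# Implied compound annual growth rate (CAGR) for the ADAS market:
# ~$14.15B in 2016 growing to a projected ~$67B by 2025.
start_value, end_value = 14.15, 67.0   # USD billions
years = 2025 - 2016                    # nine years of growth

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 19% per year
```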

The biggest roadblock to the easy adoption of ADAS-equipped cars remains their prohibitive cost. Lower rates of adoption not only reduce the overall safety of cars on the road, but also disproportionately affect poorer people. Unsurprisingly, a study in Maryland found that individuals at the upper end of the socioeconomic spectrum have greater access to vehicle safety features, leaving those at the lower end at higher risk.

A significant contributing factor to the continued high cost of automated vehicles is the high rate of car insurance. This seems rather counterintuitive. The technological evolution of safety systems reduces the risk of car crashes and other incidents, which was expected to cause a decline in insurance premiums. And yet, costs remain high. Insurance companies have resisted demands to lower premiums, claiming that the data about ADAS systems and their efficacy in reducing risk is simply not conclusive. Moreover, the industry claims that even if ADAS systems reduce the number of vehicular incidents, each incident involving an automated car costs more because of the sophisticated and often delicate hardware, such as sensors and cameras, installed in these cars. As the executive vice president of Hanover Insurance Group puts it, “There’s no such thing as a $300 bumper anymore. It’s closer to $1,500 in repair costs nowadays.”

There is no doubt that these are legitimate concerns. An industry whose entire business model involves pricing risk can hardly be blamed for seeking more accurate data to quantify said risk. Unfortunately, none of the actors involved in the automated vehicle industry are particularly forthcoming with their data. At a relatively nascent stage, the AV industry is still highly competitive, with large parts of operations shrouded in secrecy. Car manufacturers that operate fleets of automated vehicles, and no doubt gather substantial data around crash reports, are loath to share it with insurers for fear of giving away proprietary information and losing their competitive edge. The consequence of this lack of open exchange is that AVs continue to remain expensive and perhaps improperly priced from a risk standpoint.

There are some new attempts to work around this problem. Swiss Re, for example, is developing a global ADAS risk score that encourages car manufacturers to share data with them, which they in turn would use to recommend discounts to insurers. Continental AG has similarly developed a Data Monetization Platform that seemingly allows fleet operators to sell data in a secure and transparent manner to city authorities, insurers, and other interested parties. These are early days, so whether these initiatives will be able to overcome the insecurities around trade secrets and proprietary data remains to be seen.

It is clear, however, that along with the evolution of cars and technologies, the insurance industry too will need to change. As a recent Harvard Business Review article points out, automated vehicles will fundamentally alter the private car insurance market by shifting car ownership from an individual-centric model to a fleet-centric one, at least in the short to medium term. This shift alone could cost auto insurers nearly $25 billion (or one-eighth of the global market) in revenue from premiums. It is therefore imperative that the insurance industry devise innovative new approaches to pricing the risk associated with AVs. Hopefully it can do so without further driving up costs, while making safer technologies accessible to those who need them most.

The delivery industry is evolving to keep up with the rise of home delivery. Arrival, a startup building electric delivery vans, plans to add new vehicles to the roads in the next few years. The company plans to offer vehicles with different battery capacities, but the current model maxes out at 200 miles of range. Arrival’s vehicles are expected to carry 500 cubic feet of packages and up to two tons. To stay competitive as the industry moves toward automation, Arrival is designing its vehicles to accommodate autonomous systems, which will allow for a smooth transition once autonomous driving is more widely used. In the meantime, the vehicles’ Advanced Driver-Assistance Systems (ADAS) will increase safety and operating efficiencies.

Arrival has recently captured the interest of big corporations. Hyundai and Kia announced that they are investing around $110 million in Arrival and will jointly develop vehicles with the company. UPS has been a partner of Arrival since 2016 and has both invested in the startup and ordered 10,000 of its electric delivery vans. UPS was motivated to purchase these vehicles by its efforts to cut emissions and delivery costs, both of which Arrival contends its vehicles will do. UPS plans to begin using some of these vehicles later this year.

The Arrival vans along with UPS’s Waymo project “will help us continue to push the envelope on technology and new delivery models that can complement the way our drivers work,” said Juan Perez, chief information and engineering officer at UPS.

Arrival sets itself apart from other electric delivery vehicle companies in a few ways. One is its plan to establish “microfactories” that take up 10,000 square meters and make around 10,000 vehicles a year for nearby customers. The use of microfactories instead of a large plant will significantly cut the costs of manufacturing. Another unique aspect of Arrival is its modular approach to production in which the vehicle’s weight, type, size, and shape can be customized according to the purchaser’s preference.

The environmental advantage of electric vehicles over gas or diesel vehicles is a major component that will contribute to Arrival’s current and expected success. A report by the World Economic Forum revealed that deliveries will increase carbon emissions by 30% by 2030 unless there is effective intervention. One of the intervention options with the greatest impact on reducing CO2 emissions is switching to battery electric vehicles; according to the report, they can reduce CO2 emissions by 16%. UPS currently has about 123,000 delivery vehicles in its fleet. If all goes well with the electric vehicles it has purchased, the vehicles currently in UPS’s use might be phased out – the sort of intervention our environment needs.

“As mega-trends like population growth, urban migration, and e-commerce continue to accelerate, we recognize the need to work with partners around the world to solve both road congestion and pollution challenges for our customers and the communities we serve. Electric vehicles form a cornerstone to our sustainable urban delivery strategies. Taking an active investment role in Arrival enables UPS to collaborate on the design and production of the world’s most advanced electric delivery vehicles.”

Juan Perez of UPS

I recently wrote about a renewed federal push to regulate automated vehicles. I’ve previously highlighted a range of state regulatory schemes, including California’s relatively strict set of regulations. Meanwhile, the advent of truly automated vehicles, which seemed imminent when Waymo announced its driverless shuttle service in Phoenix, now may be farther away than we expected. Waymo’s shuttles still have human safety drivers, and the technology has not advanced as quickly as expected to handle all the vagaries of the road.

But as Congress and the states struggle to get a regulatory handle on this new technology, a recent Tesla update raises an important question. Is the regulatory state agile enough to adapt when the automated vehicle market evolves in unexpected ways?

Last week, Tesla unveiled “Smart Summon,” a feature in some vehicles that allows the user to summon the car to their location. With a range of 200 feet, Smart Summon is primarily designed for use in parking lots. At least one video shows its successful use in a Costco parking lot, avoiding pedestrians and other vehicles to meet its owner at the storefront. However, the feature is not limited to use in parking lots, and videos have emerged of users trying out Smart Summon on public roads, and even running in front of their vehicles to try to make the cars chase them. Despite the potential dangers this technology presents, no one has yet been injured or hit by a driverless Tesla being summoned.

Despite the seriousness with which California takes automated vehicle regulation, state authorities have determined that Teslas equipped with Smart Summon are not autonomous, and thus do not need to meet California’s AV standards. Based on regulatory definitions, this is probably correct. A state DMV spokesperson said the state defines AVs as vehicles able to drive without active physical control or monitoring by a human. Smart Summon requires a user to be attentive to their smartphone. Furthermore, its inability to operate more than 200 feet from a human controller means that it would not satisfy SAE automation level four or five.

Despite not being a true AV, though, Smart Summon clearly presents many of the same dangers as one. It operates in parking lots filled with pedestrians, vehicles, and shopping carts, all moving around each other in unpredictable ways. It is the sort of environment that can get dicey even for a human driver, whose experience and understanding of the subtle signals humans give off make an otherwise unexpected move a little more predictable. And despite a small-print company warning that Smart Summon requires “active driver supervision,” the amount of supervision anyone can truly give a moving vehicle from 200 feet away is certainly questionable.

And yet, these vehicles are not AVs. Instead, they seem to fall within an increasingly muddled gray area of transportation that is something short of fully automated, but requires much less than full and active driver attention. In California’s current regulatory environment, this technology fits neatly into a gap.

A year ago, many people assumed we were rapidly approaching the rise of Level 4 automated vehicles that could operate smoothly on city streets. Regulations developed at the time were written with that sort of technology in mind. Even one year out, legislators were not thinking of how to ensure the safety of something like Smart Summon.

So how should something like Smart Summon be regulated? What will autonomous—or semi-autonomous—technology look like a year from now, and how can government agencies prepare for it? Given the unpredictable nature of an early stage technology, regulators will continue struggling to answer these questions.

The European Parliament, the deliberative institution of the European Union, which also acts as a legislator in certain circumstances, approved the European Commission’s proposal for a new Regulation on motor vehicle safety on February 20, 2019. The proposal is now set to move to the next step of the EU legislative process; once enacted, an EU Regulation is directly applicable in the law of the 28 (soon to be 27) member states.

This regulation is noteworthy because it is meant to pave the way for Level 3 and Level 4 vehicles by obligating car makers to integrate certain “advanced safety features” into their new cars, such as driver attention warnings, emergency braking, and a lane-departure warning system. Since many of us are familiar with such features, which are already found in many recent cars, one may wonder how this would facilitate the deployment of Level 3 or even Level 4 cars. The intention of the European legislator is not outright obvious, but a more careful reading of the legislative proposal reveals that the aim goes well beyond the safety features themselves: “mandating advanced safety features for vehicles . . . will help the drivers to gradually get accustomed to the new features and will enhance public trust and acceptance in the transition toward autonomous driving.” Looking further at the proposal reveals that another concern is the changing mobility landscape in general, with “more cyclists and pedestrians [and] an aging society.” Against this backdrop, there is a perceived need for legislation, as road safety metrics have at best stalled, and are even declining in certain parts of Europe.

In addition, Advanced Emergency Braking (AEB) systems have been trending at the transnational level in these early months of 2019. The World Forum for Harmonization of Vehicle Regulations (known as WP.29) has recently put forward a draft resolution on such systems, with a view to standardizing them and making them mandatory for WP.29 members, which include most Eurasian countries along with a handful of Asia-Pacific and African countries. While the World Forum is hosted by the United Nations Economic Commission for Europe (UNECE), a regional commission of the Economic and Social Council (ECOSOC) of the UN, it notably does not include among its members certain UNECE member states, such as the United States and Canada, which have so far refused to partake in the World Forum. To be sure, the North American absence (along with that of China and India, for example) is not new; they have never partaken in the World Forum’s work since it started operations in 1958. And while the small yellow front corner lights one sees on US cars are not something you will ever see on a car circulating on the roads of a WP.29 member state, one may wonder whether the level of complexity involved in designing CAV systems will not forcibly push OEMs toward harmonization; it is one thing to live with having to manufacture different types of lights, and quite another to design and manufacture different CAV systems for different parts of the world.

Yet it is well known that certain North American regulators are not big fans of such an approach. In 2016, the US DoT proudly announced an industry commitment by almost all car makers to implement AEB systems in their cars, with the only requirement being that such systems satisfy set safety objectives. Even if everyone would agree that limited aims are sometimes the best way to get closer to the ultimate, bigger goal, regulating styles vary. In the end, one must face the fact that by 2020, AEB systems will be harmonized for a substantial part of the global car market – and maybe, de facto, even in North America. And given that the World Forum has received a clear mandate from the EU – renewed as recently as May 2018 – to develop a global and comprehensive CAV standard, the North American and Asian governments that have so far declined to join WP.29 may only be losing an opportunity to influence the outcome of such CAV standards by sticking to their guns.

Guest Blog by Jesse Halfon

Last month, two California Highway Patrol (CHP) officers made news following an arrest for drunk driving. What made the arrest unusual was that the officers initially observed the driver asleep behind the wheel while the car, a Tesla Model S, drove 70 mph on Autopilot, the vehicle’s semi-automated driving system.

Much of the media coverage about the incident revolved around the CHP maneuver to safely bring the vehicle to a stop. The officers were able to manipulate Tesla Autopilot to slow down and ultimately stop mid-highway using two patrol vehicles, one in front and one behind the ‘driverless’ car.

But USC Law Professor Orin Kerr mused online about a constitutional quandary relating to the stop, asking, “At what point is a driver asleep in an electric car that is on autopilot ‘seized’ by the police slowing down and stopping the car by getting in front of it?” This question centered on when a person asleep was seized, a reasonable 4th Amendment inquiry given the U.S. Supreme Court standard that a seizure occurs when a reasonable person would not have felt ‘free to leave’ or otherwise terminate the encounter with law enforcement.[1]

Kerr’s issue was largely hypothetical given that the police in this situation unquestionably had the legal right to stop the vehicle (and thereby seize the driver) based on public safety concerns alone.

However, a larger 4th Amendment question regarding semi-automated vehicles looms. Namely, what constitutes ‘reasonable suspicion’ to stop the driver of a vehicle on Autopilot for a traditional traffic violation like ‘reckless driving’ or ‘careless driving’?[2] Though there are no current laws that prescribe the safe operation of a semi-autonomous vehicle, many common traffic offenses are implicated by the use of automated driving features.

Some ‘automated’ traffic violations will be unchanged from the perspective of law enforcement. For example, if a vehicle on Autopilot[3] fails to properly stay in its lane, the officer can assess the vehicle’s behavior objectively and ticket the driver, who is ultimately responsible for the safe operation of the automobile. Other specific traffic violations will also be clear-cut. New York, for example, still requires by statute that a driver keep at least one hand on the wheel.[4] Many states ban texting while driving, which, though often ambiguous, offers more obvious visual cues for an officer to assess.

However, other traffic violations like reckless driving[5] will be more difficult to assess in the context of semi-automated driving.

YouTube is filled with videos of people playing cards, dancing, and doing various other non-driving activities in their Teslas while Autopilot is activated. While most of these videos are performative, real-world scenarios are commonplace. Indeed, for many consumers, the entire point of having a semi-autonomous driving system is to enable safe multi-tasking while behind the wheel.

Take, for example, the Tesla driver who is seen biting into a cheeseburger with both hands on the sandwich (and no hands on the wheel). Is this sufficient for an officer to stop a driver for careless driving? Or what about a driver writing a note on a piece of paper in the center console while talking on the phone? If, during this activity, the driver’s eyes are off the road for 3-4 seconds, is there reasonable suspicion of ‘reckless driving’ that would justify a stop? 5-6 seconds? 10? 20?

In these types of cases, the driver may argue that they were safely monitoring their semi-automated vehicle within the appropriate technological parameters. If a vehicle is maintaining a safe speed and lane-keeping on a low traffic highway, drivers will protest – how can they be judged as ‘careless’ or ‘reckless’ for light multi-tasking or brief recreation while the car drives itself?

The 4th Amendment calculus will be especially complicated for officers given that they will be unable to determine from their vantage point whether a semi-autonomous system is even activated. Autopilot is an optional upgrade for Tesla vehicles, and vehicles equipped with L2/L3 systems will often be driven inattentively without the ‘driverless’ feature enabled. Moreover, most vehicles driven today don’t even have advanced automated driving features.

A Tesla driver whose hands are off the steering wheel could be safely multi-tasking using Autopilot. But they could also be steering with their legs, or not at all. This leaves the officer, tasked with monitoring safe driving for public protection, in a difficult situation. It also leaves drivers who take advantage of semi-automated systems vulnerable to traffic stops that are arguably unnecessary and burdensome.

Of course, a driver may succeed in convincing a patrol officer not to issue a ticket by explaining their carefully considered use of the semi-automated vehicle. Or the driver could have a ‘careless driving’ ticket dismissed in court using the rationale of safely using the technology. But once a police-citizen interaction is initiated, the stakes are high.

Designing a semi-automated vehicle that defines the parameters of safe driving is complex. Crafting constitutional jurisprudence that defines the parameters of police behavior may be even more complex. Hopefully the courts are up to the task of navigating this challenging legal terrain.

Jesse Halfon is an attorney in Dykema’s Automotive and Products Liability practice group and a member of its Mobility and Advanced Transportation Team.

[1] United States v. Mendenhall, 446 U.S. 544, 554 (1980); United States v. Drayton, 536 U.S. 194, 202 (2002);Florida v. Bostick, 501 U.S. 429, 435-36 (1991).

[2] Some traffic violations are misdemeanors or felonies. To make an arrest in public for a misdemeanor, an officer needs probable cause and the crime must have occurred in the officer’s presence.  For a Terry stop involving a traffic misdemeanor, only reasonable suspicion is required.

[3] Tesla Autopilot is one of several semi-automated systems currently on the market. Others, including Cadillac Super Cruise, Mercedes-Benz Drive Pilot, and Volvo’s Pilot Assist, offer comparable capabilities.

[4] New York Vehicle and Traffic Law § 1226.

[5] Most states have a criminal offense for reckless driving. Michigan’s statute is representative and defines reckless driving as the operation of a vehicle “in willful or wanton disregard for the safety of persons or property.” See Michigan Motor Vehicle Code § 257.626. Michigan also has a civil infraction for careless driving, which is violated when a vehicle is operated in a “careless or negligent manner.” See Michigan Motor Vehicle Code § 257.626b.

Tesla’s enthusiastic marketing of its Autopilot feature may be landing the company in legal hot water. Last week, a Florida man sued the car manufacturer after his Model S crashed into a stalled vehicle at high speed. The driver, who allegedly suffered spinal and brain injuries, claims that Tesla’s “purposefully manipulative sales pitch” had duped him and other Tesla owners into the mistaken belief that their vehicles can travel on the highway almost without supervision. The outcome of the case may carry key lessons not only for Tesla, but for all automakers as they develop more autonomous features.

This isn’t the first time Tesla has faced legal challenges related to the Autopilot feature. In May, the company paid $5 million to settle a class action suit claiming its Autopilot 2.0 upgrade was unusable and dangerous. The current case, while involving only one plaintiff, could have even broader ramifications. The plaintiff’s products liability suit claims that the company has systematically duped consumers through a “pervasive national marketing campaign.” If successful, the suit could open the door to recovery for others who crash while using Autopilot.

While Tesla has typically been more grandiose in its advertising than more traditional automakers, its legal challenges highlight the struggles that auto manufacturers will face in the coming years. This year alone, Ford has packaged its driver-assist features into a system called Co-Pilot 360, and GM has called its Super Cruise system “the world’s first true hands-free driver assistance feature for the freeway.” In the near future, other car manufacturers are expected to join these companies in developing ever more autonomous features.

As the auto industry collectively drives toward the creation of truly autonomous vehicles, there will be an understandable temptation to hype up every new technological feature. Arguably, many of these features will increase auto safety when used properly. Certainly, road testing such features is a key step on the path toward fully driverless cars. The challenges facing Tesla should serve as a warning, though. Companies need to be cautious in describing their driver-assist technologies and to ensure that customers understand the limits of such new features. Doing so will have the dual benefit of reminding drivers that they should still be in control of the vehicle and shielding manufacturers from the type of liability Tesla faces today.