April 2021


By Matthew Wansley*


Human drivers are a menace to public health. In 2019, 36,096 Americans were killed in motor vehicle crashes, and an estimated 2.74 million Americans were injured. Most crashes aren’t “accidents.” The National Highway Traffic Safety Administration estimates that driver error is the critical reason for 94% of crashes. The deployment of autonomous vehicles (AVs)—likely in the form of robotaxis—will make transportation safer. AVs will cause fewer crashes than conventional vehicles (CVs) because software won’t make the kinds of errors that drivers make. AVs won’t drive drunk, get drowsy, or be distracted. They won’t speed, run red lights, or follow other vehicles too closely. They will drive cautiously and patiently. AVs will consistently drive like the safest human drivers drive at their best.

But AVs could be even safer, I argue in a forthcoming article, The End of Accidents (forthcoming in the U.C. Davis Law Review). AVs could be designed not only to avoid causing their own errors, but also to reduce the consequences of errors by human drivers, cyclists, and pedestrians. AVs can monitor their surroundings better and react more quickly than human drivers. AV technology has the potential to make better predictions and better decisions than humans can. AVs could be designed to anticipate when other road users will drive, bike, or walk unsafely and to prevent those errors from leading to crashes or make unavoidable crashes less severe. As long as AVs share the roads with humans, improving AV technology’s capability to mitigate the consequences of human error will save lives.

Liability rules will influence how much AV companies invest in developing safer technology. Existing products liability law creates insufficient incentives for safety because AV companies can reduce their liability for a crash by showing that the plaintiff was comparatively negligent. A comparative negligence defense will be a powerful liability shield because the kinds of errors that human drivers make—violating traffic laws and driving impaired—are the kinds of errors that human jurors recognize as negligence. A liability regime with a comparative negligence defense only creates incentives for AV companies to develop behaviors that AV technology has already mastered: driving at the speed limit, observing traffic signals, and maintaining a safe following distance. It won’t push AV companies to develop software that can reliably anticipate human error and take evasive action.

Data from real-world AV testing show that AVs rarely cause crashes but sometimes fail to avoid plausibly preventable ones. In October 2020, the leading AV company, Waymo, released a report of every contact between its prototype robotaxis and other vehicles, bicycles, or pedestrians during its 6.1 million miles of autonomous driving in 2019. At the current stage of testing, Waymo’s AVs usually have a backup driver behind the wheel, ready to take over manual control if necessary. Waymo’s report includes every actual contact during autonomous operation and every contact that would have happened, according to Waymo’s simulation software, if the backup driver hadn’t taken over manual control. If the report is reliable, almost every contact in the 6.1 million miles involved human error. In fact, in most cases, it’s not even arguable that the AV made an error that contributed to the contact.

Waymo’s report also reveals, however, that its AVs sometimes fail to avoid plausibly preventable crashes caused by human error. Consider the scenario depicted below. Late one night in 2019, a Waymo AV was traveling in the left lane of a divided road in the suburbs of Phoenix. A CV was traveling in the wrong direction in the right lane, veering toward the AV. The backup driver took over manual control. According to Waymo’s simulation, if the backup driver hadn’t taken over, the CV would have crashed head-on into the AV. The force of the collision would have caused the AV’s airbag to deploy. The AV would have braked, but not swerved out of the way. The CV’s driver likely wouldn’t have swerved out of the way either because the driver was likely “significantly impaired or fatigued.”

Head-On Collision from Waymo Report

Waymo doesn’t say whether its backup driver actually avoided a crash, but it’s quite possible. Evolution has armed humans with a powerful survival instinct, and the backup driver should have had room for evasive maneuvers on the wide suburban road late at night. Yet Waymo’s AV software—software that would drive 6.1 million miles that year without causing a crash—wouldn’t have prevented an apparently preventable head-on collision.

Consider the simulated crash from a liability perspective. Suppose there had been no backup driver, and the vehicles collided. Assume, consistent with the report, that the driver of the CV was drunk. Would the drunk driver prevail against Waymo in a lawsuit? Almost certainly not. The question itself sounds absurd. Drunk driving that results in a crash is negligence per se. Waymo’s comparative negligence defense would dispose of the case. Because Waymo would avoid liability for the crash, it would have little incentive to develop technology that could prevent a similar crash in the future.

Now consider the same simulated crash from a social welfare perspective. Would the social benefits of technology that could prevent a crash like this exceed the cost of development? Likely yes. Drunk driving is common. Drivers, impaired or not, sometimes drive in the wrong direction. AV technology’s ability to monitor the environment more consistently and react more quickly gives AVs advantages over CVs in responding to impaired drivers. If AV companies invest in developing better behavior prediction and decision-making capabilities, they could design AVs that would dramatically reduce the social costs of drunk driving. AVs could become superhuman defensive drivers, preventing not only crashes like this one but also crashes that now seem unpreventable.

Investments in developing safer AV software will be highly cost-effective because the software will be deployed at scale. When an AV company develops code that enables its AVs to prevent a crash in a certain kind of traffic scenario—and doesn’t make them less safe in other scenarios—it will add the new code to the software that runs on all the AVs in its fleet. The improved code will prevent a crash every time one of the company’s AVs encounters a similar traffic scenario for the rest of history. As engineers change jobs or share ideas, the fix will spill over to other AV companies’ fleets. From a social welfare perspective, the return on investments in developing safer AV technology will be tremendous.

AV companies will only develop AV technology’s full crash prevention potential if they internalize the costs of all preventable crashes. But determining which crashes could be efficiently prevented with yet-to-be-developed AV technology would be exceedingly difficult for jurors, judges, or regulators. AV technology may achieve safety gains not just by mimicking the behavior of an expert human driver but by exhibiting emergent behavior—behavior that would seem alien to a human observer. The better approach is to treat all crashes involving AVs as potentially preventable. In The End of Accidents, I defend a system of absolute liability for AV companies that I call “contact responsibility.” Under contact responsibility, AV companies would pay for the costs of all crashes in which their AVs come into contact with other vehicles, persons, or property unless they could show that the party seeking payment intentionally caused the crash. No crash involving an AV would be considered an accident.[1]

Contact responsibility would align the private financial incentives of AV companies more closely with public safety. AV companies will collect massive amounts of data on driver, cyclist, and pedestrian behavior as their fleets of AVs passively record their surroundings. Contact responsibility will push AV companies to sift through that data to find opportunities to prevent crashes efficiently. In many cases, the solution will be developing safer technology. If a company’s AVs are frequently being hit in intersections by CVs that run red lights, the company might develop software that can more reliably predict when CVs won’t stop at traffic signals. In other cases, the solution may be deploying AVs differently. The company might plan routes for its robotaxis that avoid especially dangerous roads at certain times of day. In still other cases, the solution may be political. The company might use its money to lobby for protected bike lanes, mandatory ignition interlocks, or the development of a vehicle-to-vehicle communication network.

Contact responsibility might sound radical because it would insulate human drivers from tort liability for crashes they cause negligently or even recklessly. One might worry that this would create a moral hazard risk. But liability plays at most a modest role in deterring unsafe driving. Human drivers tend to cause crashes by breaking traffic laws and driving impaired. Under contact responsibility, the civil and criminal penalties for those violations would continue to provide deterrence. Drivers would also still face liability for crashes with other CVs, cyclists, and pedestrians. They would still face the possibility that their insurers would raise their premiums after a crash with an AV, even though they weren’t held liable, because the crash indicated they had a higher risk of crashing with a CV. Most importantly, drivers would still want to avoid the risk of injuring themselves or others. Contact responsibility wouldn’t diminish those deterrents. It would simply target liability incentives where they will be most useful: AV companies’ investment decisions.

In recent years, several scholars have proposed reforms to adapt tort law to crashes involving AVs.[2] The debate has yielded valuable insights, but it has been conducted almost entirely from the armchair. Now that data on AV safety performance is publicly available, it’s possible to make more informed predictions about the real-world consequences of different liability rules. The data suggest that AV crashes will follow a predictable pattern. AVs will rarely cause crashes. But they will fail to avoid plausibly preventable crashes caused by other road users. Therefore, it’s critical for liability reform to address whether AV companies will be responsible when a negligent or reckless human driver causes a crash with an AV. Scholars who have considered the issue of comparative negligence have advocated retaining some form of the defense.[3] In fact, the leading reform proposal expressly rejects AV company responsibility for “injury caused by the egregious negligence of a CV driver, coupled with minimal causal involvement by the [AV].”[4] I argue that absolving AV companies from responsibility for those injuries would be a mistake. Contact responsibility is the only liability regime that will unlock AV technology’s full crash prevention potential.


[1] For crashes between AVs, I endorse Steven Shavell’s “strict liability to the state” proposal. See Steven Shavell, On the Redesign of Accident Liability for the World of Autonomous Vehicles 2 (Harvard Law Sch. John M. Olin Ctr., Discussion Paper No. 1014, 2019), http://www.law.harvard.edu/programs/olin_center/papers/pdf/Shavell_1014.pdf.

[2] See generally Kenneth S. Abraham & Robert L. Rabin, Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era, 105 Va. L. Rev. 127 (2019); Mark A. Geistfeld, A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation, 105 Calif. L. Rev. 1611 (2017); Kyle D. Logue, The Deterrence Case for Comprehensive Automaker Enterprise Liability, 2019 J. L. & Mobility 1; Bryant Walker Smith, Automated Driving and Product Liability, 2017 Mich. St. L. Rev. 1; David C. Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117 (2014).

[3] See, e.g., Mark A. Lemley & Bryan Casey, Remedies for Robots, 86 U. Chi. L. Rev. 1311, 1383 (2019).

[4] Abraham & Rabin, supra note 2, at 167.


* Matthew Wansley researches venture capital law and risk regulation as an Assistant Professor of Law at the Benjamin N. Cardozo School of Law. Prior to joining the Cardozo faculty, he was the General Counsel of nuTonomy Inc., an autonomous vehicle startup, and a Climenko Fellow and Lecturer on Law at Harvard Law School. He clerked for the Hon. Scott Matheson on the U.S. Court of Appeals for the Tenth Circuit and the Hon. Edgardo Ramos on the U.S. District Court for the Southern District of New York.

Earlier this month, two Texas men died when the Tesla Model S they were traveling in crashed into a tree. However, just what led to the crash remains a point of contention between authorities and Tesla itself. The police have said that one passenger was found in the front passenger-side seat, while the other was in the back – meaning that at the time of the accident, there was no one in the driver’s seat. That would suggest the vehicle’s “Autopilot” advanced safety system was active, though last week Tesla CEO Elon Musk claimed that the company had data indicating the system was not in use at the time of the accident. The investigation is ongoing, and local police have said they will subpoena Tesla to obtain the vehicle data in question.

If the Texas case does turn out to have involved the Autopilot feature, it will be far from the first. In May 2016, a Florida driver was killed when his Tesla, in Autopilot mode, crashed into the side of a semi-truck. The National Highway Traffic Safety Administration (NHTSA) investigation into that incident found no evidence of defects in the Tesla system, placing responsibility primarily on the driver, who wasn’t paying attention to the road while the vehicle operated under Autopilot. The National Transportation Safety Board (NTSB), on the other hand, put more of the blame on Tesla for allowing the driver to misuse the Autopilot features – i.e., the system didn’t disengage when being used outside of its recommended limits. Then-Secretary of Transportation Anthony Foxx echoed this when he made a point of saying that automakers like Tesla had a responsibility to ensure consumers understand the limits of driver assistance systems. In March 2018, a California man was killed when his Tesla Model X SUV crashed into a highway safety barrier, leading to an NTSB investigation and a lawsuit from the driver’s family. A third driver died in a 2019 accident while Autopilot was enabled, this time again in Florida.

At issue here is not only the safety of the Autopilot technology, but also the way it has been marketed, and the willingness of drivers to push the system beyond its capabilities. At its core, Autopilot is an advanced driver-assistance system (ADAS), meaning it can take over a number of driving tasks and help protect drivers, but a human is supposed to remain focused on the driving task. Over the years Tesla has upgraded its vehicles’ software to recognize things like stoplights and stop signs, starting with beta tests and then rolling the changes out to every Tesla on the road capable of supporting the update (though there have been humorous issues with these rollouts – like vehicles confusing Burger King signs for stop signs). In late 2020, Tesla rolled out a “Full Self-Driving” update to select vehicles, which expanded Autopilot’s operational domain to local streets (previously it was only usable on highways).

The NTSB has taken Tesla to task over Autopilot not only for the aforementioned 2016 crash, but also for a 2018 crash where a Tesla ran into the back of a stopped fire truck (no one was hurt). In that incident, Autopilot had been engaged for 13 minutes before the crash, and the driver had been distracted by their breakfast coffee and bagel. In its investigation of the 2019 Florida crash, the NTSB again cited Tesla’s failure to ensure Autopilot couldn’t be used in situations outside of its designed domain, and pointed to NHTSA’s failure to generate clear safety rules for ADAS technologies. In other cases, Autopilot has continued to operate while a driver slept or was passed out from drinking (requiring police officers to use their cars to force the vehicle to a stop).

What remains in question is the ability of Tesla vehicles to monitor human drivers and keep them engaged in the driving process. A recent Consumer Reports test illustrates how easy it can be to trick the existing monitoring system and even allow a driver to slip into the passenger seat while in motion. Tesla monitors driver engagement through the steering wheel, while some other automakers’ systems, like GM’s Super Cruise, use more direct observation via eye-tracking cameras. It’s clear there is an issue with Autopilot that needs further investigation, but what have governments done in reaction to these issues, beyond the NTSB reports we noted? And what issues are raised by the way Tesla has marketed Autopilot to consumers? I’ll explore both of those issues in my next post.

Last week, Claire wrote about how Fourth Amendment precedents and facial recognition technologies could allow law enforcement to use AVs and other camera-equipped transportation technologies as a means of surveillance. In that post she mentioned the case of Robert Julian-Borchak Williams, who last year was arrested by the Detroit Police Department based on faulty facial recognition evidence. The same day Claire’s post went up, law students from Michigan Law’s Civil Rights Litigation Initiative, along with the Michigan ACLU, sued the City of Detroit in federal court for false arrest and imprisonment in violation of Mr. Williams’ rights under the US Constitution and the Constitution of the State of Michigan.

Given the growing use of facial recognition technology by law enforcement (including in the pursuit of the January 6th insurrectionists), cases of misidentification and wrongful arrests like Mr. Williams’ will no doubt continue to occur. Indeed, there is longstanding concern about facial recognition systems misidentifying people of color – due in large part to their designers’ failure to use diverse datasets (i.e., diverse faces) when training the systems to recognize faces. Even beyond the digital era, camera technology itself has built-in biases, as it was long calibrated to better capture white skin tones. As cameras become more ubiquitous in our vehicles (including cameras monitoring the driver), issues of facial recognition will continue to collide with the emerging transportation technologies we regularly discuss here.

With all of that in mind, let’s turn to a recent case in Massachusetts that gives us a good example of how vehicle camera data can be used in a criminal investigation. On December 28, 2020, Martin Luther King, Jr. Presbyterian Church, a predominantly Black church in Springfield, MA, was destroyed by arson. Last week, the U.S. Department of Justice brought charges against a 44-year-old Maine man, Dushko Vulchev, for the destruction of the church. Just how was the FBI able to identify Mr. Vulchev as a suspect, you ask? Thanks to video footage from a Tesla vehicle parked near the church on the night of the fire. When Mr. Vulchev damaged (and later stole) the Tesla’s tires, the vehicle used its onboard cameras to record him in clean, clear footage (you can see the photos in this Gizmodo post on the case). Tesla vehicles are equipped with a number of cameras and a feature called “Sentry Mode,” which remains turned on even when the vehicle is parked and otherwise inactive. If the vehicle is damaged, or a “severe threat” is detected, the car alarm will activate and the vehicle’s owner will be able to download video of the incident beginning 10 minutes before the threat was detected. That footage proved instrumental in identifying Mr. Vulchev and placing him near the church on the night of the fire.

While the FBI didn’t use facial recognition software in this case (as far as we know), it still illustrates how the quantity and quality of vehicle-generated material will continue to be of interest in future investigations. How long before law enforcement proactively seeks video footage from any vehicle near a crime scene, even if that vehicle was otherwise uninvolved? If more OEMs adopt Tesla’s camera-based security features, could we face a future where every car on the block becomes a potential “witness”? Further, what happens when the data they produce is fed into faulty facial recognition software like the one that misidentified Mr. Williams? We live in an era of ever-more recording, and our vehicles may soon be just another device watching our every move, whether we are aware of it or not.

Brave New Road: The Role of Technology in Achieving Safe and Just Transport Systems

Expert Participants

Tuesday, March 23

Emerging Transportation Technologies, a Primer

A write-up of this panel is available here

Moderator:

Emily Frascaroli, Managing Counsel, Product Litigation Group, Ford Motor Company (US)

Emily Frascaroli is managing counsel of the Product Litigation Group at Ford Motor Company, including the product litigation, asbestos, and discovery teams. She also advises globally on automotive safety, regulatory, and product liability issues, including a focus on autonomous vehicles and mobility. She has extensive experience handling complex product litigation cases, regulatory matters with the National Highway Traffic Safety Administration and other governmental entities, and product defect investigations. She also is co-chair of the Legal and Insurance Working Group for the University of Michigan’s Mcity. In 2017, she was appointed by Gov. Rick Snyder to the Michigan Council on Future Mobility, and in 2019 she was appointed by Ohio Gov. John Kasich to the DriveOhio Expert Advisory Board.

Professor Frascaroli earned her JD, cum laude, from Wayne State University and was an editor of the Wayne Law Review. She received her BS in aerospace engineering from the University of Southern California and her MEng in aerospace engineering from the University of Michigan. Prior to practicing law, she worked in engineering at both Ford and NASA.

Expert Participants:

Jennifer A. Dukarski, Shareholder, Butzel Long

Jennifer A. Dukarski is a Shareholder based in Butzel Long’s Ann Arbor office, practicing in the areas of intellectual property, media, and technology. She focuses her practice at the intersection of technology and communications with an emphasis on the legal issues arising from emerging and disruptive innovation: digital media and content, vehicle safety, connected and autonomous cars, shared mobility, infotainment, data privacy, and security. Jennifer leads clients in securing and protecting rights in technology through transactions and litigation.  Jennifer was named one of the 30 Women Defining the Future of Technology in January 2020 by Warner Communications for her innovative thoughts and contributions to the tech industry.  She is a Certified Information Privacy Professional concentrating on the U.S. Private Sector privacy and data protection law (CIPP/US). 

Nira Pandya, Associate, Covington & Burling LLP

Nira Pandya is an associate specializing in corporate and technology transactions.  Ms. Pandya developed a keen interest in connected and automated vehicles (CAVs) during law school, where she researched issues of automation and labor.  As a member of the Transportation Research Board’s Standing Committee on Emerging Technology Law, she has assisted in the planning of the annual Automated Vehicle Symposium over the last four years. 

At Covington, Ms. Pandya continues to grow her expertise in this area by tracking and analyzing legislative and regulatory developments with respect to CAVs.  As part of this work, she has published several blog posts and client alerts on this topic.  Ms. Pandya leverages her passion and knowledge in this space to deliver exceptional service to clients in the Internet of Things (IoT) and technology sectors. 

Ms. Pandya has always been passionate about mentorship as a tool for career development, with a focus on women of color and first generation professionals.  In her free time, you’ll find her sipping pour-overs from a local coffee roaster or practicing classical singing.

Bryant Walker Smith, Co-Director of Law and Mobility Program, Associate Professor of Law, University of South Carolina Law School

Bryant Walker Smith is an associate professor in the School of Law and (by courtesy) the School of Engineering at the University of South Carolina. He also is an affiliate scholar at the Center for Internet and Society at Stanford Law School and co-director of the University of Michigan Project on Law and Mobility. He previously led the Emerging Technology Law Committee of the Transportation Research Board of the National Academies and served on the U.S. Department of Transportation’s Advisory Committee on Automation in Transportation.

Trained as a lawyer and an engineer, Smith advises cities, states, countries, and the United Nations on emerging transport technologies. He co-authored the globally influential levels of driving automation, drafted the leading model law for automated driving in the United States, and taught the first legal courses dedicated to automated driving (in 2012), hyperloops, and flying taxis. His students have developed best practices for regulating scooters, and he is writing about what it means to be a trustworthy company. His publications are available at newlypossible.org.

Before joining the University of South Carolina, Smith led the legal aspects of the automated driving program at Stanford University, clerked for The Hon. Evan J. Wallach at the U.S. Court of International Trade, and worked as a fellow at the European Bank for Reconstruction and Development. He holds both an LLM in international legal studies and a JD (cum laude) from the New York University School of Law and a BS in civil engineering from the University of Wisconsin. Prior to his legal career, Smith worked as a transportation engineer.

Wednesday, March 24

A Conversation with Paul C. Ajegba, Director, Michigan Department of Transportation

Paul C. Ajegba, P.E., Director, Michigan Department of Transportation

Paul C. Ajegba has over 30 years of experience with the Michigan Department of Transportation. After 28 years with the department, he was appointed Director by Governor Gretchen Whitmer on Jan. 1, 2019.  He previously served MDOT for three months as Metro Region Engineer, and before that as University Region Engineer.  During his seven years in the University Region, Ajegba oversaw his team’s involvement in the planning, design, and construction of several major projects, including the US-23 Flex Route – a project nominated for the America’s Transportation Award, landing among the top 12 national finalists. Other notable projects include the I-94 rehabilitation project in Ann Arbor/Jackson, the I-96/US-23 interchange, and the I-75 freeway project.

Ajegba holds a Bachelor of Science in civil engineering from Prairie View A&M University and a Master’s Degree in construction engineering from the University of Michigan.  He is a licensed professional engineer in the State of Michigan.

Paul is a member of COMTO (Conference of Minority Transportation Officials), and serves on the following boards: AASHTO, ITS America, M-City, University of Michigan College of Engineering, the Engineering Society of Detroit, and the Mackinac Bridge Authority.

Thursday, April 1

Transportation Equity and Emerging Technologies

A write-up of this panel is available here

Moderator:

C. Ndu Ozor, Associate General Counsel, University of Michigan

Ndu Ozor joined the University of Michigan Office of the Vice President and General Counsel in 2015.  As Associate General Counsel, Ndu advises his U-M clients on various business and transactional matters, primarily focusing on investments, acquisitions and divestitures, domestic and international transactions, partnerships, financing, automated vehicles, and general corporate governance.

Prior to joining the Office of the Vice President and General Counsel, Ndu was in the private equity group of Perkins Coie LLP, specializing in mergers and acquisitions and finance.  Ndu began his legal career in Chicago as an associate in the private equity group of Kirkland & Ellis LLP.  Ndu received his JD and BBA from the University of Michigan.

Expert Participants:

Robin Chase, Transportation Entrepreneur, Co-Founder of Zipcar

Robin Chase is a transportation entrepreneur. She is co-founder and former CEO of Zipcar, the world’s leading carsharing network; as well as co-founder of Veniam, a network company that moves terabytes of data between vehicles and the cloud. She has recently co-founded her first nonprofit, NUMO, a global alliance to channel the opportunities presented by new urban mobility technologies to build cities that are sustainable and just. Her recent book is Peers Inc: How People and Platforms are Inventing the Collaborative Economy and Reinventing Capitalism.

She sits on the Boards of the World Resources Institute and Tucows, and serves on the Dutch multinational DSM’s Sustainability Advisory Board. In the past, she served on the boards of Veniam and the Massachusetts Department of Transportation, the French National Digital Agency, the National Advisory Council for Innovation & Entrepreneurship for the US Department of Commerce, the Intelligent Transportations Systems Program Advisory Committee for the US Department of Transportation, the OECD’s International Transport Forum Advisory Board, the Massachusetts Governor’s Transportation Transition Working Group, and Boston Mayor’s Wireless Task Force.

Robin lectures widely, has been frequently featured in the major media, and has received many awards in the areas of innovation, design, and environment, including the prestigious Urban Land Institute’s Nichols Prize as Urban Visionary, Time 100 Most Influential People, Fast Company Fast 50 Innovators, and BusinessWeek Top 10 Designers. Robin graduated from Wellesley College and MIT’s Sloan School of Management, was a Harvard University Loeb Fellow, and received an honorary Doctorate of Design from the Illinois Institute of Technology.

Dr. David Rojas-Rueda, MD, MPH, PhD, Assistant Professor, Colorado State University

Dr. David Rojas-Rueda’s primary research focuses on promoting healthy urban design and supporting mitigation of, and adaptation to, climate change. David is an environmental epidemiologist with over ten years of experience evaluating the health and equity impacts of urban and transport planning policies related to air pollution, traffic noise, green spaces, heat island effects, physical activity, and traffic accidents. He has worked in several countries across Europe, Africa, and Latin and North America. David specializes in health impact assessment, populational risk assessment, the burden of disease, and citizen science. His research actively involves citizens, stakeholders, and local and national authorities. He has active collaborations with the World Bank and United Nations agencies, such as the World Health Organization (WHO), the Pan-American Health Organization (PAHO), and UN-Habitat.

Dr. Regan F. Patterson, PhD, Transportation Equity Research Fellow, Congressional Black Caucus Foundation

Dr. Regan F. Patterson is the Transportation Equity Research Fellow at the Congressional Black Caucus Foundation (CBCF), where she conducts intersectional transportation policy analysis and research. Prior to joining the CBCF, Dr. Patterson was a postdoctoral research fellow at the University of Michigan Institute for Social Research. She earned her PhD in Environmental Engineering at the University of California, Berkeley. Her dissertation research focused on the impact of transportation policies on air quality and environmental justice. Dr. Patterson holds a B.S. in Chemical Engineering from UCLA and an M.S. in Environmental Engineering from UC Berkeley.

Dr. Patterson’s most recent work – New Routes to Equity: The Future of Transportation in the Black Community – can be found here

Tuesday, April 6

Justice, Safety, and Transportation Policy

Building on what we’ve learned from the first two weeks, participants will focus on examining how transportation policy is generated and how policymakers can take a more active role in how new technology is deployed and used. This includes policy issues like policing, street and city design, and their intersection with technological adoption.

Moderator:

Ellen Partridge, Policy & Strategy Director, Shared-Use Mobility Center 

Ellen brings to SUMC the expertise and knowledge of nearly 20 years of work in public transit administration and operations at both the federal and transit agency levels. She was appointed Chief Counsel for the USDOT Research and Innovative Technology Administration and also served as Deputy Assistant Secretary for Research and Technology and Chief Counsel for the FTA. She is intimately familiar with the legal and regulatory landscape of public transit, including the nuances of public agency partnerships with private mobility providers.
 
At the Chicago Transit Authority, she focused on policy initiatives – first as Deputy General Counsel for Policy and Appeals and then in the Strategic Operations unit that deployed new technology and trained supervisors on how to use it to improve bus service. Before joining the nation’s second-largest transit agency, she practiced environmental law with the firms of Jenner & Block in Chicago and Van Ness Feldman in Washington, D.C. She lived in the Republic of Palau, serving as counsel to its government as it transitioned from being a United Nations Trust Territory to independence.
 
While practicing law, she taught environmental and natural resources law as an adjunct professor at Northwestern University and DePaul University Law Schools. Ellen is a fellow with Leadership Greater Chicago, was awarded a fellowship with the German Marshall Fund and was a Senior Fellow with the Environmental Law and Policy Center. She earned her law degree at Georgetown University Law School and an MBA from the University of Chicago.
 

Expert Participants:

Justin Snowden, Mobility Expert, Former Chief of Mobility Strategy for the City of Detroit

Kelly Bartlett, Connected and Automated Vehicle Specialist, Michigan Department of Transportation

Kelly Bartlett is a Connected and Automated Vehicle Specialist for the Michigan Department of Transportation (MDOT). He analyzes state and federal regulations and policies on automated vehicles, mobility, and related topics. He was closely involved in the drafting of the 2016 Michigan legislation on automated vehicles. He also assists the Michigan Council on Future Mobility and Electrification and the state’s Office of Future Mobility and Electrification as both consider and develop new policy recommendations. In addition, Mr. Bartlett participates in national work groups on federal policies. Prior to his current position, Mr. Bartlett was Senior Policy and Legislative Advisor for MDOT, and previously was a policy advisor in the Michigan Legislature.

Kristin White, Connected and Automated Vehicles Executive Director, Minnesota Department of Transportation

Kristin White is Executive Director of Minnesota’s Office of Connected and Automated Vehicles (CAV-X), a public sector tech startup and idea incubator that researches and deploys transformational technology and policy. Kristin is a lawyer, policy strategist and innovator who brings empathy and leadership into the transportation sector, challenging us to harness revolutionary technologies and grow new partnerships to build tomorrow today. The CAV-X program is one of the leading CAV programs in the nation, with its projects, research and partnerships winning the National Cronin Award, WTS Innovator Award, and AASHTO Innovation Award.

Kristin has a B.A. from St. Olaf College, law degree from Hamline University School of Law and global arbitration certification from Queen Mary University of London. She began her career as a Fulbright Fellow with the US State Department and has since represented Fortune 500 companies, cities, and states.

Wednesday, April 7 

Challenging Algorithms in Court: A Conversation with Kevin De Liban

Kevin De Liban, Director of Advocacy at Legal Aid of Arkansas

Kevin De Liban is the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers’ rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas’s use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly. In addition, he and Legal Aid of Arkansas, along with the National Health Law Program and Southern Poverty Law Center, successfully challenged Medicaid work requirements in federal court, ending the state’s unlawful use of red-tape that stripped health insurance from over 18,000 people. Kevin regularly presents about imposing accountability on artificial intelligence and algorithms and was a featured speaker at the 2018 AI Now Symposium with other leading technologists, academics, and advocates. In 2019, Kevin received the Emerging Leader award from the national community of legal aid lawyers and public defenders. His work has appeared on or in the PBS Newshour, the Washington Post, Wall Street Journal, MSNBC, the Economist, the Verge, and other publications and podcasts. When not practicing law, Kevin is passionately creating music as a rapper.

In light of the 2021 Law and Mobility Conference’s focus on equity, the Journal of Law & Mobility Blog will publish a series of blog posts surveying the civil rights issues with connected and autonomous vehicle development in the U.S. This is the fourth and final part of the AV & Civil Rights series. Part 1 focuses on Title VI of the Civil Rights Act. Part 2 focuses on the Americans with Disabilities Act. Part 3 focuses on Title II of the Civil Rights Act.

Your data says a lot about you, and widescale adoption of connected and automated vehicles (AVs) will create mountains of data that say even more. Based on location data alone, AV companies may know where you live, where you work, what you like to do in your free time, who you hang out with, and possibly even your religious and political beliefs. And, again, this is just location data; AV companies will also have extensive records of your biometric and financial information. Overall, AVs can provide constant and near-comprehensive surveillance. So what happens when the government gets access to surveillance collected by private companies and relies on it in criminal investigations?

This was the real-life nightmare of Robert Julian-Borchak Williams, a Black man in the Detroit area who was arrested and charged with a crime that he didn’t commit. A facial recognition algorithm employed by the Detroit Police Department matched his face to a surveillance image from a robbery. Williams is believed to be the first person known to have been wrongfully arrested on the basis of a flawed facial recognition match.

As of fall 2020, at least 360 police departments use facial recognition technologies, 24 use automated data analysis tools, and 26 use predictive policing measures, which aim to identify crimes before they happen by relying on historical data (and which have been shown to be racially discriminatory and ineffective). Over 1,000 police departments use surveillance drones, which were deployed to track down and arrest Black Lives Matter protesters last summer. If AVs become part of this network of technologies, this surveillance will become even more invasive, particularly for Black passengers and pedestrians, toward whom both police and artificial intelligence tend to manifest bias.

This trend of warrantless surveillance is constitutionally dubious. The Fourth Amendment protects U.S. persons from “unreasonable searches and seizures” without a warrant. Courts have considered surveillance to be an “unreasonable search or seizure” if it invades a reasonable expectation of privacy. The scope of Fourth Amendment protections was narrowed in United States v. Miller and Smith v. Maryland, where the Supreme Court held that there is no reasonable expectation of privacy in information purposefully provided to third parties. In those cases, the Court held that the government could obtain bank records and transactional phone call data without a warrant because that information was consensually relayed to third parties (namely, banks and phone companies). However, the Court declined to extend this third-party doctrine in 2018 in Carpenter v. United States, holding that a warrant was required to collect over four and a half months’ worth of cell-site location information (CSLI) for the defendant, a robbery suspect. The Court noted that, if the third-party doctrine were applied to CSLI, “[o]nly the few without cell phones could escape . . . tireless and absolute surveillance.”

The Carpenter framework could be applied to AVs, based on the potential comprehensiveness of AV surveillance; the intimate information that AV surveillance could reveal; how cheap it would be for the government to rely on AVs for both ongoing and retrospective surveillance; and the questionable voluntariness with which AV users would “provide” their information to companies. Thus, judicial application of the Carpenter framework to AVs is one way to avoid the incorporation of AVs into the surveillance state. On the legislative side, we could see a comprehensive federal privacy bill soon. However, policing is squarely within the state and local purview, and it is hard to say how far a federal law could reach into these surveillance issues as both a legal and a practical matter.

States like California and Virginia already have comprehensive privacy laws on the books that grant consumers certain rights, including the right to know what data large private companies are collecting about them and the right to opt out of the sale of their personal information by those companies. However, government and nonprofit entities are explicitly exempt from these laws. Moving forward, privacy laws should protect against government overreach, not just corporate overreach.

AV companies can certainly take action as well. Around the time of the Williams arrest, Amazon, Microsoft, and IBM announced that they would pause or stop offering their facial recognition data to law enforcement. These moves were largely symbolic, as police mostly rely on companies that are not household names for their data. In the AV context, company policies against warrantless surveillance and partnerships with law enforcement could provide users with some peace of mind.

Throughout my Civil Rights Series, I have emphasized the importance of data transparency so that agencies like the Department of Justice’s Civil Rights Division can easily track and investigate discriminatory impacts of AVs. “Transparency,” however, cannot be a mechanism for extending the surveillance state to these vehicles. Our increasingly connected and data-driven transportation systems cannot throw our privacy rights under the (connected and automated) bus.

By Christopher Chorzepa and Phillip Washburn


Week 2 of the 2021 Law and Mobility Conference opened with a discussion, moderated by C. Ndu Ozor, focusing on a variety of topics: inequalities and equity issues in our transportation system, how to prevent new transportation tech from exacerbating these issues, and how new tech can potentially help correct injustices. 

Dr. Regan F. Patterson began the panel by highlighting that automobile-dominated systems have destructive impacts on Black people and communities, and that we must explicitly consider impacts on racial violence during the transition to other technologies. Dr. Patterson highlighted how cars are frequent sites of violence against Black people, as seen in the interactions between police and George Floyd, Sandra Bland, and countless others. Citing pieces by Tamika Butler and Brentin Mock, Dr. Patterson stressed that policymakers and developers of shared electric and automated vehicles (SEAVs) must explicitly think about whether this technology can make transportation safer for Black people and diminish racial violence. 

Sadly, transportation planning has long failed to accomplish these goals. Instead, it has been used as a tool of oppression, deliberately targeting Black communities. Highway construction destroyed Black neighborhoods and placed heavily trafficked highways closer to communities of color, creating environmental justice concerns as high levels of emissions contribute to poor health outcomes. Further, Dr. Patterson framed climate change as a racial justice concern, since its impacts fall unevenly on the most vulnerable communities. She expressed a desire for a transportation system that reduces Black harm, affirms Black life, and ensures livable Black futures.

Dr. David Rojas-Rueda focused on how transportation policies and technologies shape public health. Dr. Rojas said that emerging transportation technologies should be assessed for their impacts on human health, focusing on how they affect urban design (one’s surroundings and ability to get places affect health), human behavior (physical activity affects health), disease, and mortality from accidents. Examining micromobility, Dr. Rojas found that substitution to e-scooters — from bikes, public transit, or cars — may have different health impacts depending on a city’s current transportation composition.

In Atlanta, substitution to e-scooters was harmful because of increased risk of traffic fatalities and reduced physical activity. In contrast, it was beneficial in Portland because e-scooters were associated with fewer traffic incidents. Examining SEAVs, Dr. Rojas said that human health impacts will vary based on how we handle the transition. He highlighted that SEAVs might affect health by increasing the autonomy of those who cannot drive (children, elderly, and disabled folks), reducing road deaths and injuries (although this would also reduce organ donations), presenting currently unknown risks from increased exposure to electromagnetic fields, reducing stress from driving (but potentially increasing stress through time spent working while commuting), and increasing use of alcohol and drugs (through reduced need for designated drivers). Dr. Rojas emphasized that we need to prioritize the deployment of SEAVs in low-income areas because road injuries and deaths tend to be more common in disadvantaged areas, and these communities have traditionally been underserved by transportation planning. Thus, the increased autonomy and reduced risk of road accidents from SEAVs would greatly benefit human health in disadvantaged neighborhoods.

Robin Chase stressed two problems: (1) there is an “unseen fifty percent” of the population that lacks access to safe and reliable transportation because they do not have a driver’s license or a car, or cannot afford a car or another form of transportation; and (2) whereas a right to mobility was once a background reality, we have now made it safer to cross the ocean in a plane than to cross the road in an automobile, so the unseen fifty percent cannot move without being subjected to a high risk of injury or death.

Ms. Chase proposed that we fix these problems by increasing access to shared mobility. She added that shared mobility would also have equity benefits: it would increase physical activity (putting a dent in the obesity epidemic, which disproportionately affects BIPOC), reduce the volume of traffic accidents (which also disproportionately affect BIPOC), and reduce emissions (climate change disproportionately affects BIPOC). Thus, she proposed that the government shift spending priorities away from SEAVs and toward public transit. Ms. Chase finished her presentation by stressing the equity benefits of emerging transportation technology while warning that the data assembled from digitized travel could be abused for user surveillance.

In discussion, the panelists highlighted that transportation inequities often exacerbate housing and employment inequities, and stressed that transportation and housing must be planned together to achieve the best outcomes for racial, health, and economic equity. Dr. Patterson noted that transit systems have often been used to facilitate gentrification and suburbanization, and stressed that solutions like van-pooling services connecting housing centers and transit hubs are needed to address these problems.

The panelists agreed that disadvantaged communities need to be prioritized during transportation planning because transit improvements need to benefit everyone, not just affluent communities. Because public transit is used more intensively than SEAVs, government spending priorities need to shift if we want to do the most good for the most people. To that end, the panelists set a goal of allowing poor and Black people to safely live car-independent lives, rather than continuing the current focus on subsidizing already wealthy people. For instance, we provide tax incentives to put solar panels on homes (benefiting homeowners) and to buy electric vehicles (benefiting car owners).

The final issue considered by the panelists was how much startups and smaller companies should be regulated to pursue equity goals. Dr. Patterson stated that equity needs to be built into business models from the beginning, because equity has traditionally been ignored, leading to inequitable outcomes. Otherwise, biased outcomes can be programmed into automated systems. Dr. Patterson firmly believed that switching course mid-stream is not feasible, and that equity needs to be a primary consideration at the outset. Further, Dr. Rojas felt that policymaking should be proactive and conducted in an interdisciplinary fashion, incorporating both equity and innovation concerns.

On the other hand, Ms. Chase felt that there should be a two-tiered regulation scheme with more onerous equity regulations for large companies and less red tape for startups. Ms. Chase emphasized that part of the problem faced by transportation startups is that they are not financially rewarded for their positive externalities on equity, while cars do not have to pay for the emissions, parking, and road damage they cause. Thus, she stated that companies with low volume and slim profit margins should receive less regulation so that they may grow and innovate. 

The question of when the government should require companies to meet certain transportation goals is an important one. Soft regulation can foster innovation, but may leave blind spots that persist past the initial stages. Early and consistent regulation may end some startups before they get going, but would ensure that the companies that survive have the right goals. Regardless of when it enters the stage, it is important that equity be part of all transit solutions.

In light of the 2021 Law and Mobility Conference’s focus on equity, the Journal of Law & Mobility Blog will publish a series of blog posts surveying the civil rights issues with connected and autonomous vehicle development in the U.S. This is the third part of the AV & Civil Rights series. Part 1 focuses on Title VI of the Civil Rights Act. Part 2 focuses on the Americans with Disabilities Act. Part 4 focuses on the Fourth Amendment.

As Bryan Casey discussed in Title 2.0: Discrimination Law in a Data-Driven Society, a growing number of studies indicate racial disparities in wait times, ride cancellation rates, and availability for rideshare and delivery services like Uber, Lyft, and GrubHub. Given that, for the most part, humans are behind the wheel in these cars, these disparities are the aggregate result of both conscious and unconscious biases. Drivers can choose where they pick up passengers, meaning that neighborhoods associated with marginalized demographics have fewer cars available at any given moment. Drivers may see a passenger’s name and decline that passenger based on assumptions about their race. The passenger rating system is also a challenge: drivers may—again, consciously or unconsciously—be more judgmental of a Black passenger than a white passenger when rating them between 1 and 5 at the end of a ride. Ratings can undermine users’ ability to nab a car quickly and can even get users kicked off of platforms.

As Uber and other companies transition to connected and automated vehicles (AVs), they have promoted the artificial intelligence (AI) that these vehicles will rely on as the solution to what they frame as a very human problem of bias. However, as a growing number of studies show, AI can be just as discriminatory as people. After all, machines are built and algorithms programmed by people with biases, and those algorithms in turn learn from people in the world, who also have biases. As the Wall Street Journal recently reported, “AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than whites.” And when AI is discriminatory, the harm can manifest on a far broader scale than one discriminatory person behind the wheel. Accordingly, switching from human drivers to computer drivers will not end racially disparate access to transportation, absent a concerted effort by AV companies, and perhaps by the government, to fight algorithmic discrimination.

Where does the law enter for this type of discrimination? It isn’t clear.

Title II of the Civil Rights Act of 1964 broadly mandates that all people in the U.S., regardless of “race, color, religion, or national origin,” are entitled to “full and equal enjoyment” of places of public accommodation, which are defined as any establishments that affect interstate commerce. Access to transportation undoubtedly affects participation in interstate commerce. And yet, as Casey reported, “[i]t is unclear whether Title II covers conventional cabs, much less emerging algorithmic transportation models,” including rideshare systems that have explicitly resisted categorization as public accommodations. It also remains unclear whether discrimination claims based on statistical evidence of race discrimination are cognizable under Title II, particularly given the judiciary’s increasing reluctance to remedy state and private action with a discriminatory impact, rather than clear evidence of racially discriminatory intent.

Professor Casey advocated updating Title II as one way to combat discrimination in rideshares. In particular, Congress should clarify that the statute cognizes statistically based claims and that it covers “data-driven” transportation models. This is not unheard of; the Fair Housing Act covers disparate impact. Since we published Title 2.0, there have been no litigation or policy updates in this area.

Accordingly, it will likely be up to AV companies themselves to ensure that people are not denied access to AVs on the basis of their race (or gender, socioeconomic status, or disability, for that matter) because of discriminatory algorithms. Experts have suggested reforms including frequent inventories of the discriminatory impact of AI, adjusting data sets to better represent marginalized groups, reworking data to account for discriminatory impacts, and, if none of these steps work, adjusting results to affirmatively represent more groups. At a minimum, transparency is key for both the government and concerned individuals to assess whether AVs have a discriminatory impact, and any data or findings should be widely published and shared by these companies.
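To make the idea of a "frequent inventory" concrete, one simple form such an audit could take is a periodic statistical comparison of service metrics across neighborhoods. The sketch below is purely illustrative: the wait times, neighborhood labels, and the flagging threshold are all hypothetical assumptions, not data from any real AV deployment or any method endorsed by the experts cited above.

```python
# Hypothetical sketch of a disparate-impact audit on ride wait times.
# All data and the flagging threshold are illustrative assumptions.

def mean(xs):
    return sum(xs) / len(xs)

def wait_time_ratio(group_a_waits, group_b_waits):
    """Ratio of average wait times between two rider groups.

    A ratio well above 1.0 suggests group A waits substantially
    longer than group B and warrants closer investigation.
    """
    return mean(group_a_waits) / mean(group_b_waits)

# Illustrative wait times (minutes) collected over one audit period.
neighborhood_a = [9.5, 11.0, 8.0, 12.5, 10.0]  # hypothetical majority-Black area
neighborhood_b = [5.0, 6.5, 4.0, 7.0, 5.5]     # hypothetical majority-white area

ratio = wait_time_ratio(neighborhood_a, neighborhood_b)
flagged = ratio > 1.25  # illustrative audit threshold, not a legal standard
print(f"wait-time ratio: {ratio:.2f}, flagged for review: {flagged}")
```

A real inventory would of course need far more rigor (statistical significance testing, controls for supply and demand, and many more metrics than wait time), but publishing even simple ratios like this one would give regulators and the public a starting point for scrutiny.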

Despite the shortcomings of today’s rideshares discussed above, Black users have still praised this technology as easier than hailing a cab on the street. In that way, AVs still have the opportunity to be another step toward transportation equity.