In light of the 2021 Law and Mobility Conference’s focus on equity, the Journal of Law & Mobility Blog will publish a series of blog posts surveying the civil rights issues with connected and autonomous vehicle development in the U.S. This is the third part of the AV & Civil Rights series. Part 1 focuses on Title VI of the Civil Rights Act. Part 2 focuses on the Americans with Disabilities Act. Part 4 focuses on the Fourth Amendment.
As Bryan Casey discussed in Title 2.0: Discrimination Law in a Data-Driven Society, a growing number of studies indicate racial disparities in wait times, ride cancellation rates, and availability for rideshare and delivery services like Uber, Lyft, and GrubHub. Given that, for the most part, humans are behind the wheel in these cars, these disparities are the aggregate result of both conscious and unconscious biases. Drivers can choose where they pick up passengers, meaning that neighborhoods associated with marginalized demographics have fewer cars available at any given moment. Drivers may see a passenger’s name and decline that passenger based on assumptions about their race. The passenger rating system poses a further challenge. Drivers may—again, consciously or unconsciously—be more judgmental of a Black passenger than a white passenger when rating them between 1 and 5 at the end of a ride. Low ratings can undermine users’ ability to nab a car quickly and can even get users kicked off of platforms.
As Uber and other companies transition to connected and automated vehicles (AVs), they have promoted the artificial intelligence (AI) that these vehicles will rely on as the solution to what they frame as a very human problem of bias. However, as a growing number of studies show, AI can be just as discriminatory as people. After all, people with biases make machines and program algorithms, which in turn learn from people in the world, who also have biases. As the Wall Street Journal recently reported, “AI systems have been shown to be less accurate at identifying the faces of dark-skinned women, to give women lower credit-card limits than their husbands, and to be more likely to incorrectly predict that Black defendants will commit future crimes than whites.” And when AI is discriminatory, that discrimination can manifest on a far broader scale than a single biased driver behind the wheel. Accordingly, switching from human drivers to computer drivers will not end transportation access disparities based on race, absent a concerted effort by AV companies, and perhaps by the government, to fight algorithmic discrimination.
Where does the law come in for this type of discrimination? The answer isn’t clear.
Title II of the Civil Rights Act of 1964 broadly mandates that all people in the U.S., regardless of “race, color, religion, or national origin,” are entitled to “full and equal enjoyment” of places of public accommodation, which are defined as any establishments that affect interstate commerce. Access to transportation undoubtedly affects participation in interstate commerce. And yet, as Casey reported, “[i]t is unclear whether Title II covers conventional cabs, much less emerging algorithmic transportation models,” including rideshare systems that have explicitly resisted categorization as public accommodations. It also remains unclear whether discrimination claims based on statistical evidence of race discrimination are cognizable under Title II, particularly given the judiciary’s increasing reluctance to remedy state and private action with a discriminatory impact, rather than clear evidence of racially discriminatory intent.
Professor Casey advocated updating Title II as one way to combat discrimination in rideshares. In particular, he argued that Congress should clarify that the statute cognizes statistically based claims and that it covers “data-driven” transportation models. This is not unheard of; the Fair Housing Act, for example, covers disparate-impact claims. Since we published Title 2.0, there have been no litigation or policy developments in this area.
Accordingly, it will likely be up to AV companies themselves to ensure that people are not denied access to AVs on the basis of their race (or gender, socioeconomic status, or disability, for that matter) because of discriminatory algorithms. Experts have suggested reforms including regular audits of AI systems for discriminatory impact, adjusting data sets to better represent marginalized groups, reworking data to account for discriminatory impacts, and, if none of these steps works, adjusting results to affirmatively represent more groups. At a minimum, transparency is key for both the government and concerned individuals to assess whether AVs have a discriminatory impact, and these companies should widely publish and share any data or findings.
Despite the shortcomings of today’s rideshares discussed above, Black users have still praised this technology as easier than hailing a cab on the street. In that way, AVs still have the opportunity to be another step toward transportation equity.