Data

If there are any ideas that the internet believes to be the truth in this modern day and age, I think the following would at least make the list: the government is likely watching you through your laptop’s camera, and Facebook’s algorithm may know you better than anyone else. While the internet normalizes being surveilled – and George Orwell can be heard continuously rolling over in his grave – the collection, analysis, and sale of user data is something to, at the very least, keep in mind.

Target can predict when a shopper is due to give birth based on subtle changes in shopping habits (switching from scented to unscented soap, for example); your phone tracks where you go and how often, to the point that it recognizes your routines and suggests destinations you visit regularly; and health insurance companies believe they can infer that you will be too expensive to cover simply by looking at your magazine subscriptions, whether you have any relatives living nearby, and how much time you spend watching television. It is fascinating and startling in equal measure.

When we narrow our focus to transportation and mobility, there is still an entire world of information being collected, sold, and turned into, for example, new marketing strategies for companies purchasing that data from brokers. Other times, the actor using that data-turned-actionable-intelligence is a government entity. Either way, it’s good to know and understand some of what is being collected and how it may be used, even if it’s only the tip of the iceberg. Car insurance companies track how often drivers slam on the brakes or suddenly accelerate and offer rewards for avoiding those behaviors. People have been subjected to police suspicion, or even arrested, based on incorrect geolocation data collected from their cell phones.

Despite the potentially grim picture I may have painted, user data isn’t always wielded for evil or surveillance. Recently, popular navigation app Waze added a feature that allows its users to report unplowed roads plaguing drivers during the winter months. The feature was developed through collaboration with the Virginia Department of Transportation (VDOT). Users in areas with inclement winter weather are now notified when they are coming upon a roadway that is reportedly in need of a snowplow. In addition to providing users with information and warnings, Waze also partners with transportation agencies across the U.S. and provides these agencies or local governments with this winter transportation information through the Waze for Cities Data program. The point is to make responsible parties aware of the areas that are still in need of a snowplow and assist them in prioritizing and deploying resources.

This sort of data collection is innocent enough and helpful in a person’s everyday life. According to Waze, the data is anonymized and contains no personally identifiable information (PII) when it becomes accessible to government agencies. However, as cars and cities become smarter, the risk of an individual user’s data being used for more concerning purposes is likely to increase. This danger is in addition to the privacy risks that come from carrying around and depending upon personal devices such as cell phones.
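As an aside on what “anonymized” can mean in practice: before a report reaches an agency, a pipeline can drop direct identifiers and coarsen location and time. The Python sketch below is purely illustrative – Waze has not published its pipeline, and every field and function name here is invented – but it shows the basic moves. Researchers have also repeatedly shown that coarse location traces can sometimes still be re-identified, which is one reason the risk discussed above never fully disappears.

    from dataclasses import dataclass

    @dataclass
    class HazardReport:
        user_id: str      # direct identifier: should never leave the operator's servers
        lat: float
        lon: float
        timestamp: int    # Unix epoch seconds
        hazard_type: str  # e.g., "unplowed_road"

    def anonymize(report: HazardReport) -> dict:
        """Strip identifiers and coarsen location/time before sharing."""
        return {
            # Two decimal places coarsens location to roughly 1 km.
            "lat": round(report.lat, 2),
            "lon": round(report.lon, 2),
            # Bucket timestamps into 15-minute (900-second) windows.
            "time_bucket": report.timestamp // 900 * 900,
            "hazard_type": report.hazard_type,
            # user_id is deliberately dropped: no PII in the shared record.
        }

    print(anonymize(HazardReport("u-42", 38.0293, -78.4767, 1575000123, "unplowed_road")))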

“[Cars are] data-collecting machines that patrol the streets through various levels of autonomy. That means that our mobility infrastructure is no longer static either, that infrastructure is now a data source and a data interpreter.”

Trevor English, InterestingEngineering.com

Uber went through a phase of tracking users even when they were not using the app; a number of smart city technologies are capable of capturing and combining PII and household-level data about individuals; and the City of Los Angeles wants to collect real-time data on your individual e-scooter and bikeshare trips – California’s legislature doesn’t exactly agree. As these capabilities advance, so does the law, but that doesn’t necessarily mean the race is a close one. So, while our cars and scooters and rideshare apps may not yet be the modern iteration of Big Brother, there’s always tomorrow.

Several major OEMs have recently announced that they are scaling back their shared or automated mobility ventures. Ford and Volkswagen are giving up investments in “robotaxis” – the CEO of their software partner, Argo, was quoted saying he “hates the word” anyway – and similar services operated by German automakers are withdrawing from various markets or shutting down altogether after overextending themselves during the last 18 months.

Two separate trends seem to contribute to that movement. The first: car ownership is still growing worldwide, albeit modestly – roughly 1% per year over the last ten years in Germany, for example – while sales of new cars are slumping. It is important to differentiate the two: while new car sales affect OEM revenues and may indicate changes in consumption patterns, car ownership rates are a better indicator of people’s attitudes toward car ownership. In that sense, we see a continued attachment to personal car ownership, a cultural phenomenon that is much more difficult to displace or even disrupt than some previously thought. Hence, the dreaded “peak car” that would relegate the iconic 20th-century consumer good to museums may not materialize for a while.

The second trend has to do with an observation made time and again: OEMs are not naturally good at running mobility services; their business is making cars. As one bank analyst put it, no one expects Airbus or Boeing to run an airline. Why should it be any different with car OEMs? Thinking about the prospects of automation, it became commonplace for large industrial players to partner with specialized software developers to build the automated driving system. That may result in a great product, but it does not create a market or a business plan for the AVs themselves. As it turned out, the main business plan – using these cars in large car-sharing services or selling them to existing mobility operators – ran into some roadblocks: OEMs found themselves competing with established mobility operators in a difficult market, and putting an AV safely on the road proved a much more daunting task than once thought. As 2019 comes to a close, we have yet to see an actual commercial “robotaxi” deployment outside of test runs.

This second trend puts a large question mark over the short- and medium-term financial viability of investments in “robotaxis” and automated mobility operations generally. OEMs and their partners, looking for ways to put all those vehicle automation efforts to profitable use, are turning to other markets, such as heavy, non-passenger road and industrial vehicles. Nevertheless, no one seems poised to completely exit the automated passenger mobility market; they all keep a foot in the door, continuing their tests and “gathering more data,” allegedly in order to understand the mobility needs of road users. Beyond these noble intentions, however, there is an exit plan: if all else fails, they can sell their data sets to data-hungry software developers.

In the end, this comes back to a point frequently addressed on this blog: safety. Technological advances in automation (broadly speaking) are bringing increased safety to existing cars, and they will continue to do so. We may have become overly fixated on the golden goose of the “Level 5” robotaxi (or even Level 3), which may or may not arrive in the next ten years, while neglecting the low-hanging fruit. While we laugh at our ancestors for dreaming of flying cars by the year 2000, our future selves may scoff at us for chasing robotaxis by 2020.

On April 8, 2019, it was announced at the 35th Space Symposium in Colorado Springs, Colorado, that the space industry was getting an Information Sharing and Analysis Center (ISAC). Kratos Defense & Security Solutions, “as a service to the industry and with the support of the U.S. Government,” was the first founding member of the Space-ISAC (S-ISAC).

“[ISACs] help critical infrastructure owners and operators protect their facilities, personnel and customers from cyber and physical security threats and other hazards. ISACs collect, analyze and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency.”

National Council of ISACs

ISACs, first introduced in Presidential Decision Directive-63 (PDD-63) in 1998, were intended to be one aspect of the United States’ development of “measures to swiftly eliminate any significant vulnerability to both physical and cyber attacks on our critical infrastructures, including especially our cyber systems.” PDD-63 requested “each critical infrastructure sector to establish sector-specific organizations to share information about threats and vulnerabilities.” In 2003, Homeland Security Presidential Directive 7 (HSPD-7) reaffirmed the relationship between the public and private sectors of critical infrastructure in the development of ISACs.

Today, there are ISACs in place for a number of subsectors within the sixteen critical infrastructure sectors, for specific geographic regions, and for different levels of government.

However, the S-ISAC, while undoubtedly a good call, has left me with a few questions.

Why so much government involvement?

From what I’ve read, the Federal government’s role is to “collaborate with appropriate private sector entities and continue to encourage the development of information sharing and analysis mechanisms.” For example, the Aviation-ISAC (A-ISAC) was formed when “[t]here was consensus that the community needed an Aviation ISAC”; the Automotive-ISAC (Auto-ISAC) came into being when “[fourteen] light-duty vehicle [Original Equipment Manufacturers] decided to come together to charter the formation of Auto-ISAC”; and the Information Technology-ISAC (IT-ISAC) “was established by leading Information Technology Companies in 2000.”

Reportedly, it was not the private actors within the space industry that recognized or felt the need for the S-ISAC, but an interagency body designed to monitor and occasionally guide or direct efforts across space agencies. The Science and Technology Partnership Forum has three principal partner agencies: U.S. Air Force (USAF) Space Command, the National Aeronautics and Space Administration (NASA), and the National Reconnaissance Office (NRO).

Additionally, it appears as though Kratos, a contractor for the Department of Defense and other agencies, was the only private actor involved in the development and formation of the S-ISAC.

These are just some things to keep in mind. The S-ISAC’s perhaps-unique characteristics must be considered in light of the clear national security and defense interests that these agencies and others have in the information-sharing mechanism. Also, since the announcement of the S-ISAC, Kratos has been joined by Booz Allen Hamilton, Mitre Corporation, Lockheed Martin, and SES as founding members.

Why an ISAC?

Again, ISACs are typically the domain of the private owners, operators, and actors within an industry or sector. As new vulnerabilities and threats related to the United States’ space activities have rapidly emerged in recent years, it would seem to make more sense for the Federal government to push for the development of an Information Sharing and Analysis Organization (ISAO). ISAOs, formed in response to Executive Order 13691 (EO 13691) in 2015, are designed to enable private companies and federal agencies “to share information related to cybersecurity risks and incidents and collaborate to respond in as close to real time as possible.”

While ISAOs and ISACs share the same goals, there appear to be a number of differences between the two information-sharing mechanisms. ISACs can charge high membership fees, which often fund the sector’s ISAC but can block smaller organizations or new actors from joining; ISAOs, by contrast, can draw on grants from the Department of Homeland Security (DHS) to fund their establishment and continued operation. ISACs – the A-ISAC, for example – also seem to monitor and control the flow of member-provided information available to the Federal government more closely than ISAOs do.

Also, ISACs – such as those recognized by the National Council of ISACs (NCI) – are typically limited to sectors that have been designated as Critical Infrastructure and the associated sub-sectors. Despite obvious reasons why it should, space has not been recognized as a critical infrastructure sector.

For now, this seems like a good place to end. This introductory look into ISACs generally and the S-ISAC has left me with many questions about the organization itself and its developing relationship with the private space industry as a whole. Hopefully, these questions and more will be answered in the coming days as the S-ISAC and the private space industry continue to develop and grow. 

Here are some of my unaddressed questions to consider while exploring the new S-ISAC: Why develop the S-ISAC now? What types of companies are welcome to become members – only defense contractors, or also, for example, commercial satellite constellation operators and small rocket launch companies? As the commercial space industry continues to grow in areas such as space tourism, will the S-ISAC welcome these actors as well, or will we see the establishment of a nearly identical organization with a different name?

October 2019 Mobility Grab Bag

Every month brings new developments in mobility, so let’s take a minute to break down a few recent items that touch on issues we’ve previously discussed on the blog:

New AV Deployments

This month saw a test deployment of Level 4 vehicles in London, which even allowed members of the public to ride as passengers (with a safety driver). Meanwhile, in Arizona, Waymo announced it will be deploying vehicles without safety drivers, though it appears only members of their early-access test group will be riding in them for now. We’ve written a lot about Waymo, from some early problems with pedestrians and other drivers, to the regulations placed on the company by Arizona’s government, to their potential ability to navigate human-controlled intersections.

Georgia Supreme Court Requires a Warrant for Vehicle Data

This Monday, the Georgia Supreme Court ruled in Mobley v. State that recovering data from a vehicle without a warrant “implicates the Fourth Amendment, regardless of any reasonable expectations of privacy.” The court found that an investigator entering the vehicle to download data from the vehicle’s airbag control unit constituted “physical intrusion of a personal motor vehicle,” an action which “generally is a search for purposes of the Fourth Amendment under the traditional common law trespass standard.” Given the amount of data currently collected by vehicles and the ever-increasing amount that CAVs can and will collect, rulings like this are very important in dictating how and when law enforcement can obtain vehicle data. We’ve previously written about CAVs and the 4th Amendment, as well as other privacy implications of CAVs, both with regard to government access to data and the use of CAV data by private parties.

Personal Cargo Bots Could Bring Even More Traffic to Your Sidewalk

In May, as part of a series on drones, I wrote about a number of test programs deploying small delivery bots for last-mile deliveries via the sidewalk. A recent Washington Post article highlights another potential contender for sidewalk space – personal cargo bots. Called “gita,” the bot can travel at up to 6 mph, using its onboard cameras to track and follow its owner via the owner’s gait. The bot’s developers see it as enhancing mobility, since it would allow people to shop on foot without worrying about carrying their goods home. For city-dwellers that may improve grocery trips – if they can shell out $3,000+ for one!

Even More Aerial Drones to Bring Goods to Your Door

Last month, as part two of the drone series, I looked at aerial delivery drones. In that piece I mentioned that Google-owned Wing would be making drone deliveries in Virginia, and Wing recently announced a partnership with Walgreens that will be part of that test. Yesterday Wired pointed out that UPS has made a similar deal with CVS – though it remains to be seen whether the drones will have to deliver the infamously long CVS receipts as well. As Wired noted, because drugstores carry goods whose absence can create an emergency (like medication and diapers), speedy air delivery could fill a useful niche. So next time you’re home with a cold, you may be able to order decongestant flown to your bedside, or at least to the yard outside your bedroom window.

P.S. – While not related to any past writings, this article is pretty interesting – Purdue scientists took inspiration from the small hairs on the legs of spiders to invent a new sensor that ignores minor forces acting on a vehicle while detecting major ones, making it easier for CAVs and drones to focus computing power on important things in their environment without getting distracted.
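The core idea, as described, is thresholding done by the hardware itself: below a set force, the sensor simply produces nothing, so the vehicle’s processor never spends cycles on background vibration. A toy software analogue (all numbers invented) of that filtering logic:

    def significant_events(samples, threshold=5.0):
        """Yield only (time, force) samples whose magnitude exceeds the
        threshold, so downstream code ignores background vibration."""
        for t, force in samples:
            if abs(force) >= threshold:
                yield t, force

    readings = [(0.00, 0.4), (0.01, -0.7), (0.02, 9.3), (0.03, 0.2)]
    print(list(significant_events(readings)))  # -> [(0.02, 9.3)]

The difference, of course, is that the Purdue sensor does this mechanically, before any computation happens at all.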

In 2015, Google’s parent, Alphabet, decided the time was ripe for establishing a subsidiary in charge of investing in “smart infrastructure” projects – from waste to transport and energy. Its specific aim was to implement such projects, transforming our urban landscape into a realm of dynamic and connected infrastructure. Fast forward two years, and Sidewalk Labs had become embroiled in a smart city project covering a somewhat derelict (but highly valuable) area of Toronto along the shores of Lake Ontario.

Back in 2001, the Canadian metropolis set up the aptly named Waterfront Toronto (WT), a publicly controlled corporation in charge of revitalizing the city’s entire Lake Ontario waterfront. In early 2017, WT published a “Request for Proposals” seeking an “investment and funding partner” for what would become known as the Quayside project. By the end of the year, the Alphabet subsidiary had been chosen.

It is important to note that this project was initially conceived as a real estate venture: the desired innovation was to be found in building materials and carbon neutrality, while achieving certain social housing goals. There was no express desire for a model “smart city” of any sort; although the document does mention the use of “smart technologies,” it is always in the context of reducing building costs and improving the carbon footprint.

Critics were quick to point out the puzzling choice: as innovative as it may be, Alphabet has no experience in real estate development. Rather, its core business is data processing and analytics, sometimes for research and often for advertising purposes. What was meant to be a carbon-positive real estate project seemed to be morphing into a hyper-connected (and expensive) urban hub.

And then came Sidewalk Labs’ detailed proposal. The visuals are neat; tellingly, there is not a single electronic device to be found in those pictures (is that one man on his cellphone?!). The words, however, tell another story. Carbon footprint and building costs take a back seat to (personal) data processing: “Sidewalk expects Quayside to become the most measurable community in the world,” as stated in the winning proposal. One wonders whether its drafters sincerely thought that, in this day and age, such a statement would fly with public opinion.

Critics of the project (who have since coalesced into the #BlockSidewalk movement) used the opportunity to dig deeper into WT itself, highlighting governance issues and the top-down character of the original Request for Proposals, beyond the plethora of data privacy questions (if not problems) the Sidewalk Labs proposal raised. In response, Sidewalk Labs deployed a vast public relations campaign whose success is far from guaranteed: it has “upgraded” the project, aiming for a bigger plot of land and even a new light rail line (funded mostly with public money). At the time of this writing, WT has yet to make its final decision on whether to retain the Alphabet subsidiary’s project.

What lessons can we draw from the Toronto experience? “Smart city” projects are bound to become more commonplace, and while this one was not initially meant as such, some will be more straightforward in their aims. First, we should question the necessity of connecting every single thing and person. It matters to keep in mind the social objectives of a given project, such as reducing carbon footprint or building costs. Collection of personal data can then be organized around and in service of those objectives, rather than as an end in itself. Connecting the park bench may be fancy, but for what purpose? More down to earth, the same question can be asked of street lights.

As Christof Spieler reminds us in a recent tweet thread, certain municipal governments may be approached with “free” turnkey projects of connected infrastructure, in exchange (oh wait, it’s not free?) for both data and the integration of the developer’s pre-existing systems into that infrastructure. Think of advertisements, and all the other possible monetization avenues… As Spieler points out, monetized smart infrastructure may come at a heavy social cost.

Beyond that, one may wonder – who do we want as developers of such projects? Do we need the Sidewalk Labs of this world to realize the post-industrial heaven shown in the visuals of the Proposal? How will multinational data crunchers with an ominous track record make our cities smarter? The burden of proof is on them.

The European Union recently adopted new rules to help consumers repair household appliances like refrigerators and televisions. The rules require manufacturers to provide spare parts for years after sale – the number of years depending on the device. The “Ecodesign Directive” is intended to help protect the environment by extending the life of consumer appliances. The regulation also applies to servers, requiring firmware updates for 7 years post-production. These regulations are part of a larger battle over consumers’ right to repair their belongings, including vehicles. Vehicles are already part of the right to repair discussion, and the deployment of technically complicated CAVs will ramp up that conversation, as some manufacturers seek to limit the ability of individuals to repair their vehicles.

One current battle over the right to repair is taking place in California. In September of last year, the California Farm Bureau, the agricultural lobbying group that represents farmers, gave up the right to purchase repair parts for farm equipment without going through a dealer. Rather than being able to buy parts from whomever they’d like, California farmers must turn to equipment dealers, who previously were unwilling even to allow farmers access to repair manuals for vehicles they already owned. A big part of the dispute stems from companies like John Deere placing digital locks on their equipment that prevent “unauthorized” repairs – i.e., repairs done by anyone other than a John Deere employee. The company even made farmers sign license agreements forbidding nearly all repairs or modifications and shielding John Deere from liability for any losses farmers may suffer from software failures. Some farmers resorted to using Ukrainian-sourced firmware to update their vehicles’ software rather than pay for a John Deere technician. The California case is especially ironic, as the state has solid right-to-repair laws for other consumer goods, requiring companies to offer repairs for electronics for 7 years after production (though companies like Apple have been fighting against the state passing even more open right-to-repair laws).

In 2018, supporters of the right to repair were boosted by a copyright decision from the Librarian of Congress, which granted an exception to existing copyright law allowing owners and repair professionals to hack into a device to repair it. The exception is limited, however, and doesn’t cover things like video game consoles, though its language did include “motorized land vehicles.”

So how could battles over the right to repair influence the deployment of CAVs? First off, given the amount of complicated equipment and software that goes into CAVs, regulations like those recently adopted in the EU could help extend a vehicle’s lifespan. Cars last a long time – the average American vehicle is 11.8 years old. Right-to-repair laws could require manufacturers to supply the parts and software updates needed to keep CAVs on the road. New legislation could protect consumer access to the data within their vehicles, so owners don’t have to rely on proprietary manufacturer systems to know what’s going on inside. A 2011 study of auto repair shops showed a 24% savings for consumers who used a third-party repair shop over a dealership, so independent access to data and spare parts is vital to keeping consumer maintenance costs down. People are used to taking their cars to independent repair shops or even fixing them at home, and many consumers will likely want to keep that ability as CAVs enter service.

P.S. – Two updates to my drone post from last week:

Update 1 – University of Michigan (Go Blue!) researchers have demonstrated a drone that can place shingles on a roof, using an interesting system of static cameras surrounding the work site rather than on-board cameras – though it remains to be seen how many people want a nail-gun-equipped drone flying over their heads…

Update 2 – UPS has been granted approval to fly an unlimited number of delivery drones beyond line-of-sight, though they still can’t fly over urban areas. UPS has been testing the drones by delivering medical supplies on a North Carolina hospital campus.

For the past several months, this blog has primarily focused on new legal questions that will be raised by connected and automated vehicles. This new transportation technology will undoubtedly raise novel concerns around tort liability, traffic stops, and city design. Along with raising novel problems, CAVs will also add new urgency to longstanding legal challenges. In some ways, this is best encapsulated in the field of privacy and data management.

In recent decades, the need to understand where our data goes has increased exponentially. The smartphones most of us carry every day are already capable of tracking our location and recording much of our personal information. In addition to this computer and data-generation machine in our pockets, the CAV will be a supercomputer on wheels, predicted to generate 4,000 gigabytes of data per day (roughly 46 megabytes every second, around the clock). Human-driven vehicles with some automated features, such as Teslas with the company’s “Autopilot” functionality, already collect vast amounts of user data. Tesla’s website notes that the company may access a user’s browsing history, navigation history, and radio listening history, for example.

In response to this growing concern, California recently passed a sweeping new digital privacy law, set to take effect in 2020. Nicknamed “GDPR-Lite” after the European Union’s General Data Protection Regulation, California’s law “grants consumers the right to know what information companies are collecting about them, why they are collecting that data and with whom they are sharing it.” It also requires companies to delete data about a customer upon request, and mandates that companies provide the same quality and cost of service to users who opt out of data collection as those who opt in.

In comparison to the GDPR, California’s law is relatively limited in scope. The California Consumer Privacy Act (CCPA) is tailored to apply only to businesses that are relatively large or that are primarily engaged in the business of collecting and selling personal data. Furthermore, CCPA contains few limitations on what a business can do internally with data it collects. Instead, it focuses on the sale of that data to third parties.

In many ways, it remains too early to evaluate the effectiveness of California’s approach. This is in part because the law does not take effect until the beginning of next year. The bill also enables the California Attorney General to issue guidance and regulations fleshing out the requirements of the bill. These as-yet-unknown regulations will play a major role in how CCPA operates in practice.

Regardless of its uncertainties and potential shortcomings, though, CCPA is likely to play a significant role in the future of American data privacy law and policy. It is the first significant privacy legislation in the US to respond to the recent tech boom, and it comes out of a state that is the world’s fifth-largest economy. CCPA’s implementation will undoubtedly provide important lessons for both other states and the federal government as they consider the future of data privacy.

By Bryan Casey

Cite as: Bryan Casey, Title 2.0: Discrimination Law in a Data-Driven Society, 2019 J. L. & Mob. 36.

Abstract

More than a half century after civil rights activists pioneered America’s first ridesharing network, the connections between transportation, innovation, and discrimination are again on full display. Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combatting stubbornly persistent barriers to transportation. But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability in companies including Uber, Lyft, Task Rabbit, Grubhub, and Amazon Delivery.

Surveying the methodologies employed by these studies reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical level, not an individual one. As a structural matter, this isn’t coincidental. As America transitions to an increasingly algorithmic society, all signs now suggest we are leaving traditional brick-and-mortar establishments behind for a new breed of data-driven ones. Discrimination, in other words, is going digital. And when it does, it will manifest itself—almost by definition—at a macroscopic scale. Why does this matter? Because not all of our civil rights laws cognize statistically-based discrimination claims. And as it so happens, Title II could be among them.

This piece discusses the implications of this doctrinal uncertainty in a world where statistically-based claims are likely to be pressed against data-driven establishments with increasing regularity. Its goals are twofold. First, it seeks to build upon adjacent scholarship by fleshing out the specific structural features of emerging business models that will make Title II’s cognizance of “disparate effect” claims so urgent. In doing so, it argues that it is not the “platform economy,” per se, that poses an existential threat to the statute but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature. It is the algorithms underlying “platform economy businesses” that are of greatest doctrinal concern—regardless of whether such businesses operate inside the platform economy or outside it. Second, this essay joins others in calling for policy reforms focused on modernizing our civil rights canon. It argues that our transition from the “Internet Society” to the “Algorithmic Society” will demand that Title II receive a doctrinal update. If it is to remain relevant in the years and decades ahead, Title II must become Title 2.0.


Introduction

For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics.

—Oliver Wendell Holmes, Jr. 1 1. Oliver Wendell Holmes, The Path of the Law, 10 Harv. L. Rev. 457, 469 (1897). ×

The future is already here—it is just unevenly distributed.

—William Gibson 2 2. As quoted in Peering round the corner, The Economist, Oct. 11, 2001, https://www.economist.com/special-report/2001/10/11/peering-round-the-corner. ×

It took just four days after Rosa Parks’ arrest to mount a response. Jo Ann Robinson, E.D. Nixon, Ralph Abernathy, and a little-known pastor named Martin Luther King, Jr. would head a coalition of activists boycotting Montgomery, Alabama’s public buses. 3 3. Jack M. Bloom, Class, Race, and the Civil Rights Movement 140 (Ind. U. Press ed. 1987). × Leaders announced the plan the next day, expecting something like a 60% turnout. 4 4. Id. × But to their surprise, more than 90% of the city’s black ridership joined. The total exceeded 40,000 individuals. 5 5. See History.com Editors, How the Montgomery Bus Boycott Accelerated the Civil Rights Movement, History Channel (Feb. 3, 2010), https://www.history.com/topics/black-history/montgomery-bus-boycott. ×

Sheer numbers—they quickly realized—meant that relying on taxis as their sole means of vehicular transport would be impossible. Instead, they got creative. The coalition organized an elaborate system of carpools and cabbies that managed to charge rates comparable to Montgomery’s own municipal system. 6 6. Id. × And so it was that America’s first ridesharing network was born. 7 7. More precisely, the first large-scale ridesharing network making use of automobiles. ×

Fast forward some sixty years to the present and the connections between transportation, innovation, and civil rights are again on full display. Nowadays, the networking system pioneered by Montgomery’s protestors is among the hottest tickets in tech. Newly minted startups launching “ridesharing platforms,” “carsourcing software,” “delivery sharing networks,” “bikesharing” offerings, “carpooling apps,” and “scooter sharing” schemes are a seemingly daily fixture of the news. And just as was true during the Civil Rights Movement, discrimination continues to be a hot-button issue.

Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combatting discriminatory barriers to transportation that stubbornly persist in modern America. 8 8. See infra Part I. × But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability in the likes of Uber, Lyft, Task Rabbit, Grubhub, and Amazon Delivery. 9 9. See infra Part I(A). × The weight of the evidence suggests a cautionary tale: The same technologies capable of combatting modern discrimination also appear capable of producing it.

Surveying the methodologies employed by these reports reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical—not individual—scale. 10 10. See infra Part I(A). ×

As a structural matter, this isn’t coincidental. Uber, Amazon, and a host of other technology leaders have transformed traditional brick-and-mortar business models into data-driven ones fit for the digital age. Yet in doing so, they’ve also taken much discretion out of the hands of individual decision-makers and put it into the hands of algorithms. 11 11. See infra Part II(D). × This transfer holds genuine promise of alleviating the kinds of overt prejudice familiar to Rosa Parks and her fellow activists. But it also means that when discrimination does occur, it will manifest—almost by definition—at a statistical scale.

This piece discusses the implications of this fast-approaching reality for one of our most canonical civil rights statutes, Title II of the Civil Rights Act of 1964. 12 12. Civil Rights Act of 1964, tit. II, 42 U.S.C. § 2000a (2018). × Today, a tentative consensus holds that certain of our civil rights laws recognize claims of “discriminatory effect” based in statistical evidence. But Title II is not among them. 13 13. See infra Part II(B). Major courts have recently taken up the issue tangentially, but uncertainty still reigns. × Indeed, more than a half century after its passage, it remains genuinely unclear whether the statute encompasses disparate effect claims at all.

This essay explores the implications of this doctrinal uncertainty in a world where statistically-based claims are likely to be pressed against data-driven companies with increasing regularity. Its goals are twofold. First, it seeks to build upon adjacent scholarship 14 14. Of particular note is a groundbreaking piece by Nancy Leong and Aaron Belzer, The New Public Accommodations: Race Discrimination in the Platform Economy, 105 Geo. L. J. 1271 (2017). × by fleshing out the specific structural features of emerging business models that will make Title II’s cognizance of disparate effect claims so urgent. In doing so, it argues that it is not the “platform economy,” per se, that poses a threat to the civil rights law but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature. 15 15. Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501, 509 (1999) (describing “architecture,” “norms,” “law,” and “markets” as the four primary modes of regulation). × It is the algorithms underlying emerging platform economy businesses that are of greatest doctrinal concern—regardless of whether such businesses operate inside the platform economy or outside it. 16 16. And, needless to say, there will be a great many more companies that operate outside of it. ×

Second, this essay joins other scholars in calling for policy reforms focused on modernizing our civil rights canon. 17 17. See, e.g., Leong & Belzer supra note 14; Andrew Selbst, Disparate Impact in Big Data Policing, 52 Georgia L. Rev. 109 (2017) (discussing disparate impact liability in other civil rights contexts). × It argues that our transition from the “Internet Society” to the “Algorithmic Society” will demand that Title II receive a doctrinal update. 18 18. See Jack Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, 51 U.C. Davis L. Rev. 1149, 1150 (noting that society is entering a new post-internet phase he calls the “Algorithmic Society”). × If the statute is to remain relevant in the years and decades ahead, Title II must become Title 2.0.

I.          The Rise of Data-Driven Transportation

Today, algorithms drive society. They power the apps we use to skirt traffic, the networking systems we use to dispatch mobility services, and even the on-demand delivery providers we use to avoid driving in the first place.

For most Americans, paper atlases have been shrugged. Algorithms, of one variety or another, now govern how we move. And far from being anywhere near “peak” 19 19. Gil Press, A Very Short History of Digitization, Forbes (Dec. 27, 2015), https://www.forbes.com/sites/gilpress/2015/12/27/a-very-short-history-of-digitization/#1560b2bb49ac (describing digitization technologies in terms of “peak” adoption). × levels of digitization, society’s embrace of algorithms only appears to be gaining steam. With announcements of new autonomous and connected technologies now a daily fixture of the media, all signs suggest that we’re at the beginning of a long road to algorithmic ubiquity. Data-driven transportation might rightly be described as pervasive today. But tomorrow, it is poised to become the de facto means by which people, goods, and services get from Point A to B.

Many have high hopes for this high-tech future, particularly when it comes to combatting longstanding issues of discrimination in transportation. Observers have hailed the likes of Uber and Lyft as finally allowing “African American customers [to] catch a drama-free lift from point A to point B.” 20 20. E.g., Latoya Peterson, Uber’s Convenient Racial Politics, Splinter News (Jul. 23, 2015), https://splinternews.com/ubers-convenient-racial-politics-1793849400. × They’ve championed low-cost delivery services, such as Amazon and Grubhub, as providing viable alternatives to transit for individuals with disabilities. 21 21. See, e.g., Winnie Sun, Why What Amazon Has Done For Medicaid And Low-Income Americans Matters, Forbes (Mar. 7, 2018), https://www.forbes.com/sites/winniesun/2018/03/07/why-what-amazon-has-done-for-medicaid-and-low-income-americans-matters/#7dbe2ff1ac76; Paige Wyatt, Amazon Offers Discounted Prime Membership to Medicaid Recipients, The Mighty (Mar. 9, 2018), https://themighty.com/2018/03/amazon-prime-discount-medicaid/. × And they’ve even praised navigation apps, like Waze, for bursting drivers’ “very white, very male, very middle-to-upper class” bubbles. 22 22. E.g., Mike Eynon, How Using Waze Unmasked My Privilege, Medium (Oct. 2, 2015), https://medium.com/diversify-tech/how-using-waze-unmasked-my-privilege-26 355a84fe05. × It is through algorithmic transportation, in other words, that we’re beginning to glimpse a more equitable America—with our mobility systems finally exorcised of the types of discrimination that stubbornly persist today, some fifty years after the passage of modern civil rights legislation.

A.         Out With the Old Bias, In With the New?

As with seemingly all significant technological breakthroughs, however, algorithmic transportation also gives rise to new challenges. And discrimination is no exception. Already, multiple studies have revealed the potential for racial bias to infiltrate the likes of Uber, Lyft, Grubhub, and Amazon. 23 23. See infra notes 14 – 28. See also, e.g., Jacob Thebault-Spieker et al., Towards a Geographic Understanding of the Sharing Economy: Systemic Biases in UberX and TaskRabbit, 21 ACM Transactions on Computer-Human Interaction (2017). × The National Bureau of Economic Research’s (“NBER”) groundbreaking study revealing a pattern of racial discrimination in Uber and Lyft services is one such exemplar. 24 24. Yanbo Ge, et al., Racial and Gender Discrimination in Transportation Network Companies, (2016), http://www.nber.org/papers/w22776. × After deploying test subjects on nearly 1,500 trips, researchers found that black riders 25 25. Or riders with black-sounding names. × experienced significantly longer wait times and higher cancellation rates than their white counterparts.

The NBER’s piece was preceded—months earlier—by a similarly provocative report from Jennifer Stark and Nicholas Diakopoulos. 26 26. See Jennifer Stark & Nicholas Diakopoulos, Uber Seems to Offer Better Service in Areas With More White People. That Raises Some Tough Questions., Wash. Post (Mar. 10, 2016), https://www.washingtonpost.com/news/wonk/wp/2016/03/10/uber-seems-to-offer-better-service-in-areas-with-more-white-people-that-raises-some-tough-questions/. × Using a month’s worth of Uber API data, the scholars found a statistical correlation between passenger wait times and neighborhood demographic makeup. The upshot? That Uber’s patented “surge pricing algorithm” resulted in disproportionately longer wait times for people of color, even after controlling for factors such as income, poverty, and population density.

Another example comes from Bloomberg, which reported in 2016 that Amazon’s expedited delivery services tended to bypass areas composed of predominantly black residents. 27 27. See David Ingold & Spencer Soper, Amazon Doesn’t Consider the Race of Its Customers. Should It?, Bloomberg (Apr. 21, 2016), https://www.bloomberg.com/graphics/2016-amazon-same-day/. × Bloomberg’s findings were subsequently buttressed by a Washington Post piece revealing that the “delivery zones” of services such as Grubhub, Door Dash, Amazon Restaurants, and Caviar appeared highly limited in low-income, minority-majority areas. 28 28. Tim Carman, D.C. has never had more food delivery options. Unless you live across the Anacostia River., Wash. Post (Apr. 2, 2018), https://www.washingtonpost.com/news/food/wp/2018/04/02/dc-has-never-had-more-food-delivery-options-unless-you-live-across-the-anacostia-river/?utm_term=.dead0dca9e8a. ×

B.         Discrimination’s Digital Architecture

While the patterns and practices uncovered by these reports vary dramatically, they share one commonality whose importance cannot be overstated. Each of them measures racial bias at a statistical—not individual—scale.
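To make the point concrete, consider a minimal sketch of the style of analysis such studies employ. The wait times below are invented, and the test is deliberately simplified (the actual studies control for income, density, and other confounders); it shows only how a finding of bias emerges from aggregates rather than from any single interaction:

    import random

    # Invented wait times (minutes) for riders in two demographic groups.
    group_a = [4.1, 5.3, 3.8, 4.6, 5.0, 4.4]
    group_b = [6.2, 7.1, 5.9, 6.8, 7.4, 6.5]

    def mean(xs):
        return sum(xs) / len(xs)

    observed_gap = mean(group_b) - mean(group_a)

    # Permutation test: if group labels were irrelevant, random relabelings
    # would produce gaps this large reasonably often. A tiny p-value suggests
    # the disparity is not chance.
    pooled = group_a + group_b
    extreme, trials = 0, 10_000
    for _ in range(trials):
        random.shuffle(pooled)
        if mean(pooled[len(group_a):]) - mean(pooled[:len(group_a)]) >= observed_gap:
            extreme += 1

    print(f"observed gap: {observed_gap:.2f} min; p = {extreme / trials:.4f}")

No individual trip in such a data set need look discriminatory on its own; the claim exists only at the level of the distribution.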

As a structural matter, this observation is in some sense unavoidable. When discrimination occurs in traditional brick-and-mortar contexts, it generally does so out in the open. It is difficult to turn someone away from Starbucks, 29 29. This example is pulled from an all-too-recent headline. See Rachel Adams, Starbucks to Close 8,000 U.S. Stores for Racial-Bias Training After Arrests, N.Y. Times (Apr. 17, 2018), https://www.nytimes.com/2018/04/17/business/starbucks-arrests-racial-bias.html. × after all, without them being made aware of the denial, even if the precise rationale is not clear.

But as the means by which Americans secure their transportation, food, and lodging goes increasingly digital, the “architecture” 30 30. See Lessig, supra note 15. × of discrimination will take on a different face. Our interactions with cab companies, public transportation providers, and delivery services will be mediated by algorithms that we neither see nor necessarily understand. And face-to-face interactions with service providers, meanwhile, will become a thing of the past.

In countless respects, this transition is cause for celebration. A society driven by algorithms is one that holds genuine hope of eliminating the types of overt discrimination that drove civil rights reforms of past eras. But in its stead, an emerging body of evidence suggests that subtler forms of discrimination may persist—ones that could challenge the doctrinal foundations on which our civil rights laws currently rest.

II.         When Blackletter Civil Rights Law Isn’t Black and White

When it comes to holding private entities that provide our transportation, food, and lodging accountable for racial discrimination, the usual suspect is Title II of the Civil Rights Act. Title II sets forth the basic guarantee that “[a]ll persons [are] entitled to the full and equal enjoyment of the goods, services, facilities, privileges, advantages, and accommodations of any place of public accommodation. . . without discrimination or segregation on the ground of race, color, religion, or national origin.” 31 31. Civil Rights Act of 1964, 42 U.S.C. § 2000a(a) (2018). × The statute defines “public accommodation” broadly as essentially any “establishment affecting interstate commerce.” 32 32. See id (with the exception of a few carve outs—private clubs being one such example). ×

Pursuing a Title II claim requires, first, establishing a prima facie case of discrimination. To do so, claimants must show they: (1) are members of a protected class; (2) were denied the full benefits of a public accommodation; and (3) were treated less favorably than others 33 33. Id (specifically, “. . . treated less favorably than others outside of the protected class” who are similarly situated). × outside of the protected class. 34 34. Having established a prima facie case, the burden of persuasion then shifts to the defendant. For simplicity’s sake, this piece strictly analyzes prima facie claims and does not delve into the complexities of burden shifting and justifying legitimate business decisions under modern antidiscrimination law. ×

A.         The Intent Requirement and the Man of Statistics

At first blush, establishing these prima facie elements using the types of evidence documented by the reports noted in Part I(A) may seem straightforward. But there’s just one tiny detail standing in the way. As it turns out, no one knows whether Title II actually prohibits the kinds of racial disparities uncovered by the studies.

Not all civil rights laws, after all, allow claimants to use statistically-disparate impacts as evidence of discrimination. Title VI, for example, does not, whereas Title VII does.

This distinction owes, in large part, to the antidiscrimination canon’s “intent requirement,” which draws a doctrinal dividing line between acts exhibiting “discriminatory intent” and those, instead, exhibiting “discriminatory effects.” 35 35. See Implementation of the Fair Housing Act’s Discriminatory Effects Standard, 78 Fed. Reg. 11,460 (Feb. 15, 2013) (codified at 24 C.F.R. § 100.500(1) (2014)). × To oversimplify, acts of intent can be understood as overt, “invidious acts of prejudiced decision-making.” 36 36. Susan Carle, A New Look at the History of Title VII Disparate Impact Doctrine, 63 Flo. L. Rev. 251, 258 (2011). × Acts of effect, meanwhile, are those that “actually or predictably . . . result[] in a disparate impact on a group of persons” even when the explicit intent behind them is not discriminatory. 37 37. See Implementation of the Fair Housing Act’s Discriminatory Effects Standard, supra note 35. ×

Ask Rosa Parks to give up her seat for a white passenger? The civil rights claim filed in response will likely take a narrow view of the interaction, examining the discrete intent behind it. Systematically route buses in such a way that they bypass Rosa Parks altogether? Under the right circumstances, this could be evidence of discrimination equally as troubling as in the former scenario. But the civil rights claim it gave rise to would likely entail a far wider view of the world—one that couched its arguments in statistics. 38 38. Title VII offers plaintiffs a “disparate impact” framework under which they may prove unlawful discrimination alongside the more traditional “disparate treatment” model. 42 U.S.C. § 2000e-2(k)(l)(A) (1994). ×

Today, a tentative consensus holds that theories involving discriminatory effects are available under the Fair Housing Act, the Age Discrimination and Employment Act, certain Titles of the Americans With Disabilities Act, and Title VII of the Civil Rights Act. When it comes to Title II, however, the jury is still out. Neither the Supreme Court, a major circuit court, nor a federal administrative body has resolved the issue to date, and “there is a paucity of cases analyzing it.” 39 39. Hardie v. Nat’l Collegiate Athletic Ass’n, 97 F. Supp. 3d 1163, 1163 (S.D. Cal. 2015), aff’d, 861 F.3d 875 (9th Cir. 2017), and superseded by, 876 F.3d 312 (9th Cir. 2017). ×

B.         Hardie’s Open Question

Uncertainties surrounding Title II’s scope most recently came to a head in Hardie v. NCAA. The case involved a challenge to the collegiate association’s policy of banning convicted felons from coaching certain tournaments. The plaintiff, Dominic Hardie, alleged that the policy disparately impacted blacks, putting the question of Title II’s “discriminatory effect” liability at center stage.

The court of first impression ruled against Hardie, finding that Title II did not cognize such claims. But on appeal, the case’s focal point changed dramatically. In a surprise turn of events, the NCAA abandoned its structural argument against disparate impact liability outright. Instead, it conceded that Title II did, in fact, recognize statistical effects but asserted that the NCAA’s policy was, nonetheless, not a violation. 40 40. See id. (“On appeal, the NCAA does not challenge Hardie’s argument that Title II encompasses disparate-impact claims. . . . Instead, the NCAA asks us to affirm entry of summary judgment in its favor on either of two other grounds advanced below, assuming arguendo that disparate-impact claims are cognizable under Title II.”). ×

Thus, when the case came before the 9th Circuit, the question of whether Title II encompassed discriminatory effects was, essentially, rendered moot. The court ruled in favor of the NCAA’s narrower argument but went out of its way to emphasize that it had not decided the question of discriminatory effect liability. And no other major appeals court has addressed the issue since.

C.         Title II’s Fair Housing Act Moment

It was not long ago that another civil rights centerpiece—the Fair Housing Act of 1968 (FHA)—found itself at a similar crossroads. The FHA makes it illegal to deny someone housing based on race. But a half century after the statute’s passage, the question of whether it prohibited disparate effects had not been tested in our highest court.

By 2015, the Supreme Court had twice taken up the issue in two years. 41 41. See Gallagher v. Magner, 619 F.3d 823 (8th Cir. 2010), cert. dismissed, 565 U.S. 1187, 132 S.Ct. 1306 (2012); Mt. Holly Gardens Citizens in Action, Inc. v. Twp. of Mt. Holly, 658 F.3d 375 (3rd Cir. 2011), cert. dismissed, 571 U.S. 1020, 134 S.Ct. 636 (2013). × And twice, the cases had settled in advance of a ruling.

Then came Texas Department of Housing and Community Affairs v. The Inclusive Communities Project, a suit alleging that a state agency’s allocation of tax credits disparately impacted the housing options of low-income families of color. 42 42. Tex. Dep’t of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 135 S. Ct. 2507, 2514 (2015) [hereinafter “Inclusive Communities”]. × This time, there was no settlement. And the ruling that followed was subsequently described as the “most important decision on fair housing in a generation.” 43 43. Kristen Capps, With Justice Kennedy’s Retirement, Fair Housing Is in Peril, Citylab (Jun. 28, 2018), https://www.citylab.com/equity/2018/06/what-justice-kennedys-retirement-means-for-fair-housing/563924/. ×

Writing for the 5-4 majority, Justice Kennedy affirmed that the FHA extended to claims of both discriminatory intent and effect. 44 44. But his ruling, according to some commenters, took a troublingly narrow view of viable disparate impact claims. × Kennedy was careful to note that the FHA’s passage occurred at a time when explicitly racist policies—such as zoning laws, racial covenants, and redlining—were the norm. But the Justice, nonetheless, stressed that more modern claims alleging racially disparate impacts were also “consistent with the FHA’s central purpose.” 45 45. See Inclusive Communities supra note 42. ×

D.        The New Back of the Bus

Much like the FHA, Title II arrived on the scene when discriminatory effect claims were far from the leading concern among civil rights activists. As Richard Epstein writes:

“Title II was passed when memories were still fresh of the many indignities that had been inflicted on African American citizens on a routine basis. It took little imagination to understand that something was deeply wrong with a nation in which it was difficult, if not impossible, for African American citizens to secure food, transportation, and lodging when traveling from place to place in large sections of the country. In some instances, no such facilities were available, and in other cases they were only available on limited and unequal terms.” 46 46. Richard A. Epstein, Public Accommodations Under the Civil Rights Act of 1964: Why Freedom of Association Counts as a Human Right, 66 Stan. L. Rev. 1241, 1242 (2014). ×

The paradigmatic act of discrimination, in other words, was intentional, overt, and explicitly racial.

Today, however, we are heading toward a world in which this paradigm is apt to turn on its head. Gone will be the days of racially explicit denials of service such as the well-documented phenomena of “hailing a cab while black,” “dining while black,” “driving while black,” or “shopping while black.” 47 47. See, e.g., Matthew Yglesias, Uber and Taxi Racism, Slate (Nov. 28, 2012), http://www.slate.com/blogs/moneybox/2012/11/28/uber_makes_cabbing_while_black_easier.html; Danielle Dirks & Stephen K. Rice in Race and Ethnicity: Across Time, Space, and Discipline 259 (Rodney Coates ed., 2004). × But as an increasing body of evidence suggests, inequality will not simply disappear as a consequence. Rather, discrimination will go digital. And when it does occur, it will likely manifest not as a discrete act of individual intent but instead as a statistically disparate effect.

With this future in view, forecasting the consequences for Title II requires little speculation. Absent the ability to bring statistically-based claims against tomorrow’s data-driven establishments, Title II could be rendered irrelevant. 48 48. In effect, this means that the greatest threat to the statute may not be the doctrinal uncertainty posed by “platform economy businesses,” per se. Instead, it could be the algorithmic “architecture” that drives such companies, regardless of whether they adopt a “platform” business model. ×

If America is to deliver on its guarantee of equal access to public accommodations, its civil rights laws must reach the data-driven delivery services, transportation providers, and logistics operators that increasingly move our society. [49. No matter one’s ideological view, the dismantling of legislation through mere technological obsolescence would be a troubling outcome.] Failing to do so simply because these business models were not the norm at the time of the statute’s passage could lead to tragic results. As Oliver Wendell Holmes, Jr. wrote more than a century ago:

“It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past.” [50. See Holmes, supra note 1, at 469.]

If one of our antidiscrimination canon’s most iconic statutes is to be saved from such a fate, all signs suggest it will need a doctrinal update. Title II, in software parlance, must become Title 2.0.

III.       A Policy Roadmap for Title 2.0

With the foregoing analysis in our rearview mirror, it is now possible to explore the road ahead. The policy challenges of applying Title II to a data-driven society appear to be at least threefold. Policymakers should establish: (1) whether Title II cognizes statistically based claims; (2) which modern entities Title II covers; and (3) what oversight mechanisms are necessary to detect discrimination by such entities. The following sections discuss these three challenges, as well as the steps policymakers can take to address them through judicial, legislative, or regulatory reform.

A.         Statistically-based Claims in a Data-Driven Society

The first, and most obvious, policy reform entails simply clarifying Title II’s cognizance of statistically based claims. Such clarification could come at the judicial or regulatory level, as occurred with the FHA. Or it could come at the legislative level, as occurred with Title VII.

Though the question of whether litigants can sustain statistical claims under Title II may seem like an all-or-nothing proposition, recent experience shows otherwise. Short of directly translating Title VII theories to Title II, there exist numerous alternatives. Justice Kennedy himself noted as much in Inclusive Communities when he remarked that “the Title VII framework may not transfer exactly to [all other] context[s].” [51. See Inclusive Communities, supra note 42.]

Nancy Leong and Aaron Belzer convincingly argue that one framing might involve adopting a modern take on discriminatory intent claims. The scholars assert that even if intent is deemed essential under Title II, statistically based claims could nevertheless satisfy the requirement. [52. See Leong & Belzer, supra note 14, at 1313.] In their telling, the intent requirement could manifest through a company’s “decision to continue using a platform design or rating system despite having compelling evidence that the system results in racially disparate treatment of customers.” [53. See id.] Under this view, the claim would then be distinguishable from unintentional claims because “once the aggregated data is known to reflect bias and result in discrimination,” its continued use would constitute evidence of intent. [54. See id. Indeed, this argument may become especially compelling in a world where improved digital analytics enable much more customized targeting of individuals or traits. With more fine-grained control over data-driven algorithms, it may become much more difficult to justify the use of those that appear to perpetuate bias against protected groups.]
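Leong and Belzer’s theory presupposes an underlying statistical showing. For concreteness, here is a minimal sketch—in Python, with entirely hypothetical ride data—of what the simplest such showing might look like, borrowing the EEOC’s “four-fifths” rule of thumb from the Title VII context as a benchmark:

```python
# A minimal, illustrative sketch: testing platform outcome data for a
# racially disparate effect using the EEOC's "four-fifths" rule of thumb
# (a Title VII benchmark). All data below is hypothetical.

def completion_rate(outcomes):
    """Share of ride requests that were actually completed."""
    return sum(outcomes) / len(outcomes)

# 1 = ride completed, 0 = request cancelled by the driver (hypothetical)
rides_group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
rides_group_b = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1]

rate_a = completion_rate(rides_group_a)  # 0.80
rate_b = completion_rate(rides_group_b)  # 0.60
ratio = rate_b / rate_a                  # 0.75

print(f"Completion rates: {rate_a:.0%} vs. {rate_b:.0%} (ratio {ratio:.2f})")
if ratio < 0.8:
    # By convention, a ratio below 0.8 is treated as evidence of adverse
    # impact warranting further scrutiny.
    print("Potential disparate impact detected.")
```

On Leong and Belzer’s theory, of course, it is a company’s continued use of its system after confronting numbers like these—not the numbers alone—that would supply the requisite intent.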

Not only would this approach heed Kennedy’s admonition in Inclusive Communities “that disparate-impact liability [be] properly limited,” [55. See Inclusive Communities, supra note 42.] it may also offer an elegant means of addressing concerns, raised in dissent, that Title II claims must demonstrate a defendant’s discriminatory “intent.” [56. See, e.g., id. (Justice Alito’s dissent highlighted Title II’s “because of” language).] Policymakers should, therefore, take this line of analysis into consideration when clarifying Title II’s scope.

B.         Public Accommodations in a Data-Driven Society

Although this essay has thus far presumed that large-scale algorithmic transportation services like Uber and Amazon are covered by Title II, even that conclusion remains unclear. As enacted, Title II is actually silent as to whether it covers conventional cabs, much less emerging algorithmic transportation models. [57. See, e.g., Bryan Casey, Uber’s Dilemma: How the ADA Could End the On Demand Economy, 12 U. Mass. L. Rev. 124, 134 (citing Ramos v. Uber Techs., Inc., No. SA-14-CA-502-XR, 2015 WL 758087, at *11 (W.D. Tex. Feb. 20, 2015)).] A second policy reform, therefore, would entail clarifying whether Title II actually covers such entities in the first place.

Here, understanding the origins of the Civil Rights Act of 1964 is again useful. The statute lists several examples of public accommodations that were typical of America circa 1960. [58. Civil Rights Act of 1964, tit. II, 42 U.S.C. § 2000a(b) (2018).] Some courts have suggested that this list is more or less exhaustive. [59. See Leong & Belzer, supra note 14, at 1296.] But that view is inconsistent with the law’s own language. [60. Civil Rights Act of 1964, tit. II, 42 U.S.C. § 2000a(a) (2018) (prohibiting discrimination in “establishment[s] affecting interstate commerce”).] And numerous other courts have taken a broader view of the term “public accommodations,” one that extends to entities not necessarily foreseen by the statute’s original drafters. [61. See, e.g., Miller v. Amusement Enters., Inc., 394 F.2d 342, 349 (5th Cir. 1968) (“Title II of the Civil Rights Act is to be liberally construed and broadly read.”).]

Policymakers in search of analogous interpretations of public accommodations laws need look no further than the Americans with Disabilities Act (ADA). Like Title II, the ADA covers places of public accommodation. And, again like Title II, its drafters listed specific entities as examples—all of which were the types of brick-and-mortar establishments characteristic of the time. But in the decades since its passage, the ADA’s definition has managed to keep pace with our increasingly digital world. Multiple courts have extended the statute’s reach to distinctly digital establishments, including popular websites and video streaming providers. [62. See Nat’l Ass’n of the Deaf v. Netflix, Inc., 869 F. Supp. 2d 196, 200-02 (D. Mass. 2012) (holding that the video streaming service constitutes a “public accommodation” even if it lacks a physical nexus); National Federation of the Blind v. Scribd Inc., 97 F. Supp. 3d 565, 576 (D. Vt. 2015) (holding that an online repository constitutes a “public accommodation” for the purpose of the ADA). But see Tara E. Thompson, Comment, Locating Discrimination: Interactive Web Sites as Public Accommodations Under Title II of the Civil Rights Act, 2002 U. Chi. Legal F. 409, 412 (“The courts, however, have not reached a consensus as to under what circumstances ‘non-physical’ establishments can be Title II public accommodations.”); Noah v. AOL Time Warner Inc., 261 F. Supp. 2d 532, 543-44 (E.D. Va. 2003) (holding that an online chatroom was not a “public accommodation” under Title II).]

Policymakers should note, however, that Uber and Lyft have fiercely resisted categorization as public accommodations. [63. See Casey, supra note 57. The Department of Justice and numerous courts have expressed skepticism of this view. But, to date, there has been no definitive answer to this question—due in part to the tendency of lawsuits against Uber and Lyft to settle in advance of formal rulings.] In response to numerous suits filed against them, the companies have insisted they are merely “platforms” or “marketplaces” connecting sellers and buyers of particular services. [64. See id.] As recently as 2015, this defense was at least plausible. And numerous scholars have discussed the doctrinal challenges of applying antidiscrimination laws to these types of businesses. [65. See generally id.; Leong & Belzer, supra note 14.] But increasingly, companies like Uber, Lyft, and Amazon are shifting away from passive “platform” or “marketplace” models into more active service provider roles. [66. See Bryan Casey, A Loophole Large Enough to Drive an Autonomous Vehicle Through: The ADA’s “New Van” Provision and the Future of Access to Transportation, Stan. L. Rev. Online (Dec. 2016), https://www.stanfordlawreview.org/online/loophole-large-enough/ (describing Uber’s and Lyft’s efforts to deploy autonomous taxi fleets). Other platform companies in different sectors are acting similarly. See, e.g., Katie Burke, Airbnb Proposes New Perk For Hosts: A Stake in The Company, San Francisco Bus. Times (Sept. 21, 2018), https://www.bizjournals.com/sanfrancisco/news/2018/09/21/airbnb-hosts-ipo-sec-equity.html.] All three, for example, now deploy transportation services directly. And a slew of similarly situated companies appear poised to replicate this model. [67. See Casey, supra note 66 (noting the ambitions of Tesla, Google, and a host of others to deploy similar autonomous taxi models).] For most such companies, passive descriptors like “platform” or “marketplace” are no longer applicable. Our laws should categorize them accordingly.

C.         Oversight in a Data-Driven Society

Finally, regulators should consider implementing oversight mechanisms that allow third parties to engage with the data necessary to measure and detect discrimination. In an era of big data and even bigger trade secrets, this is of paramount importance. Because companies retain almost exclusive control over their proprietary software and its resultant data, the barriers to accessing the information needed even to detect algorithmic impacts can often be insurmountable. And the ensuing asymmetries can render discrimination or bias effectively invisible to outsiders.
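What would meaningful third-party access enable? As a minimal sketch—using invented coverage figures rather than any company’s actual data—an auditor with access to aggregate, anonymized service records could run comparisons as simple as this:

```python
# Hypothetical third-party audit: does a service cover majority-white and
# majority-Black ZIP codes at comparable rates? Each record pairs a ZIP
# code's majority demographic with whether the service covers it.
# All figures are invented for illustration.

coverage = [
    ("majority_white", True), ("majority_white", True),
    ("majority_white", True), ("majority_white", False),
    ("majority_black", True), ("majority_black", False),
    ("majority_black", False), ("majority_black", False),
]

def coverage_rate(records, group):
    flags = [covered for g, covered in records if g == group]
    return sum(flags) / len(flags)

for group in ("majority_white", "majority_black"):
    print(f"{group}: {coverage_rate(coverage, group):.0%} of ZIP codes covered")
# Nothing here requires personally identifiable information; aggregate,
# anonymized records suffice.
```

A comparison of essentially this shape drove the reporting on racial gaps in Amazon’s same-day delivery service noted below—analysis that was possible only because the relevant coverage data could be assembled by outsiders.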

Another benefit of oversight mechanisms is their ability to promote good corporate governance without the overhead of more intrusive command-and-control regulations. Alongside transparency, after all, comes the potential for extralegal forces such as ethical consumerism, corporate social responsibility, perception bias, and reputational costs to play meaningful roles in checking potentially negative behaviors. [68. See Bryan Casey, Amoral Machines; Or, How Roboticists Can Learn to Stop Worrying and Love the Law, 111 Nw. U. L. Rev. Online at 1358. There was, for example, a happy ending to the recent revelations regarding racial disparities in Amazon delivery services. See Spencer Soper, Amazon to Fill All Racial Gaps in Same-Day Delivery Service, Bloomberg (May 6, 2016), https://www.bloomberg.com/news/articles/2016-05-06/amazon-to-fill-racial-gaps-in-same-day-delivery-after-complaints.] By pricing externalities through the threat of public or regulatory backlash, these and other market forces can help to regulate sectors undergoing periods of rapid disruption with less risk of chilling innovation than traditional regulation. [69. As importantly, this encourages proactive antidiscrimination efforts as opposed to retroactive ones. See Mark Lemley & Bryan Casey, Remedies for Robots, U. Chi. L. Rev. (forthcoming 2019). Without meaningful oversight, the primary risk is not that industry will intentionally build discriminatory systems but that “[biased] effects [will] simply happen, without public understanding or deliberation, led by technology companies and governments that are yet to understand the broader implications of their technologies once they are released into complex social systems.” See Alex Campolo et al., AI Now 2017 Report (2017).]

Some scholars have proposed federal reforms—akin to those put forward by the Equal Employment Opportunity Commission, [70. 29 C.F.R. § 1602.7 (1991).] the Department of Housing and Urban Development, [71. 24 C.F.R. §§ 1.6, 1.8 (1996).] and the Department of Education [72. 34 C.F.R. § 100.6 (1988).]—as a means of implementing oversight mechanisms for Title II. [73. See Leong & Belzer, supra note 14.] But state-level action may, in this instance, prove more effective: a push waged on multiple fronts nationwide offers a higher likelihood of successful reform. And much like the “Brussels Effect” documented at an international level, intra-territorial policies imposed on inter-territorial entities can have extra-territorial effects within the U.S. [74. See Anu Bradford, The Brussels Effect, 107 Nw. U. L. Rev. 1 (2012).] As the saying goes: “As goes California, so goes the nation.” [75. This saying is equally applicable to numerous other populous states.]

As a parting note, it cannot be stressed enough that mere “disclosure” mechanisms will not necessarily suffice. [76. See Andrew Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, Fordham L. Rev. (forthcoming 2019).] For oversight to be meaningful, it must be actionable—or, in Deirdre Mulligan’s phrasing, “contestable.” [77. Deirdre Mulligan et al., Privacy is an Essentially Contested Concept: A Multi-Dimensional Analytic For Mapping Privacy, 374 Phil. Trans. R. Soc. 1, 3 (2016), https://www.law.berkeley.edu/wp-content/uploads/2017/07/Privacy-is-an-essentially.pdf.] That is, it must allow downstream users to “contest[] what the ideal really is.” [78. See id.] Moreover, if oversight is to be accomplished through specific administrative bodies, policymakers must ensure that those bodies have the technical know-how and financial resources available to promote public accountability, transparency, and stakeholder participation. Numerous scholars have explored these concerns at length, and regulators would do well to consider their insights.

Conclusion

Following any major technological disruption, scholars, industry leaders, and policymakers must consider the challenges it poses to our existing systems of governance. Will the technology meld? Must our policies change?

Algorithmic transportation is no exception. This piece examines its implications for one of America’s most iconic statutes: Title II of the Civil Rights Act of 1964. As algorithms expand into a vast array of transportation contexts, they will increasingly test the doctrinal foundations of this canonical law. And without meaningful intervention, Title II could soon find itself at risk of irrelevance.

But unlike the policy responses that greeted past technological breakthroughs, those we have seen so far offer genuine hope of timely reform. As Ryan Calo notes, unlike a host of other transformative technologies that escaped policymakers’ attention until too late, this new breed “has managed to capture [their] attention early [] in its life-cycle.”

Can this attention be channeled in directions that ensure that our most important civil rights laws keep pace with innovation? That question, it now appears, should be at the forefront of our policy agenda.


Legal Fellow, Center for Automotive Research at Stanford (CARS); Affiliate Scholar of CodeX: The Center for Legal Informatics at Stanford and the Stanford Machine Learning Group. The author particularly thanks Chris Gerdes, Stephen Zoepf, Rabia Belt, and the Center for Automotive Research at Stanford (CARS) for their generous support.

The common story of automated vehicle safety is that by eliminating human error from the driving equation, cars will act more predictably, fewer crashes will occur, and lives will be saved. That future is still uncertain, though. Questions remain about whether CAVs will truly be safer drivers than humans in practice, and for whom they will be safer. In the remainder of this post, I will address this “for whom” question.

A recent study from Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern at Georgia Tech found that state-of-the-art object detection systems – the type used in autonomous vehicles – demonstrate higher error rates when detecting darker-skinned pedestrians as compared to lighter-skinned pedestrians. Controlling for factors like time of day and obstructed views, the technology was five percentage points less accurate at detecting people with darker skin tones.
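Stated precisely, the disparity is a gap in per-group detection rates. Here is a rough sketch of that comparison—with invented counts chosen only to mirror the reported five-point gap, not the study’s actual figures:

```python
# Rough sketch of a per-group detection-rate comparison. The counts are
# invented to mirror the reported ~5 percentage point gap; the actual
# study evaluated object detection models on annotated pedestrian images.

detections = {
    # group: (pedestrians correctly detected, total pedestrians labeled)
    "lighter_skin": (700, 1000),
    "darker_skin": (650, 1000),
}

rates = {group: hits / total for group, (hits, total) in detections.items()}
gap = rates["lighter_skin"] - rates["darker_skin"]

for group, rate in rates.items():
    print(f"{group}: {rate:.1%} detection rate")
print(f"gap: {gap:.1%}")  # 5.0% — i.e., five percentage points
```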

The Georgia Tech study is far from the first report of algorithmic bias. In 2015, Google found itself at the center of controversy when its algorithm for Google Photos incorrectly classified some black people as gorillas. More than two years later, Google’s temporary fix of removing the label “gorilla” from the program entirely was still in place. The company says it is working on a long-term fix to its image recognition software. However, the continued presence of the temporary solution several years after the initial firestorm is some indication either of the difficulty of achieving a real solution or of the lack of any serious coordinated response across the tech industry.

Algorithmic bias is a serious problem that must be tackled with a serious investment of resources across the industry. In the case of autonomous vehicles, the problem could be literally life and death. The potential for bias in automated systems raises serious moral and legal questions. If a car is safer overall, but more likely to run over a black or brown pedestrian than a white one, should that car be allowed on the road? What is the safety baseline against which such a vehicle should be judged? Is the standard, “The AV should be just as likely (hopefully not very likely) to hit any given pedestrian”? Or is it, “The AV should hit any given pedestrian less often than a human-driven vehicle would”? Given our knowledge of algorithmic bias, should an automaker be exposed to greater damages when its vehicle hits a black or brown pedestrian than when it hits a white one? Do tort law claims, like design defect or negligence, provide adequate incentive for automakers to address algorithmic bias in their systems? Or should the government set up a uniform system of regulation and testing around the detection of algorithmic bias in autonomous vehicles and other advanced, potentially dangerous technologies?

These are questions that I cannot answer today. But as the Georgia Tech study and the Google Photos scandal demonstrate, they are questions that the AV industry, government, and society as a whole will need to address in the coming years.

Americans have traditionally had an understandable skepticism towards government collection of our data and monitoring of our private communications. The uproar caused by the Snowden leaks in 2013 was followed by increased public attention to data privacy. In a 2014-15 survey, 57% of respondents said that government monitoring of the communications of U.S. citizens was unacceptable. Over 90% of respondents found it important to be able to personally control what data about them was shared, and with whom. The public has expressed similar concerns about data-sharing among private companies. Nearly two-thirds of Americans say that current laws do not go far enough to protect their privacy, and would support increased regulation of advertisers.

Limitations on government collection of private data are built into the Fourth Amendment, as applied to collection of digital data in Carpenter. But there is no analogous limitation on the ability of corporations to share our data far and wide, as anyone who has seen a targeted Facebook ad pop up minutes after searching for an item on Amazon knows. Indeed, First Amendment cases such as Sorrell v. IMS Health, in bolstering protections for commercial speech, may significantly restrict the ability of Congress to regulate private companies selling our data amongst themselves. While many targeted ads can make data sharing seem harmless (I see you just bought a watch. Perhaps I can interest you in these 73 similar watches?), at times it may be more nefarious. 

Public unease with data sharing may be especially warranted in the case of mobility data. The majority of Americans move about the world in cars. While many of those trips are innocuous, some may be trips to an unpopular church, to the home of a secret paramour, or to the scene of a crime. Even the innocuous trips may be simply embarrassing (maybe you ate at a fast food restaurant a few more times than you should have, or fibbed to your spouse once or twice about working late when you were actually getting an after-work drink with friends). These are the types of excursions that, if your car were continuously collecting data on its whereabouts, could easily be exposed—and the records sold to a private actor willing to use them against you.

The concern that a private company could abuse access to your personal data just as easily as the government has led legal scholar Jeffrey Rosen to propose a new Constitutional amendment. Such affronts to dignity, as Rosen describes this all-consuming data collection and sale, are problematic enough that we need an amendment barring unreasonable searches and seizures by either the government or a private corporation. Mercatus Center Senior Research Fellow Adam Thierer has argued that Rosen’s proposal is ill-advised, but still supports making it easier for consumers to restrict access to their private data.

Under current doctrine, the path to heightened protections from abuse of our personal data by private companies is unclear. In Carpenter, the Court took account of the changing nature of technology to limit the government’s ability to collect our information from corporations under the Fourth Amendment. Going forward, the Court should bear in mind the public’s desire for privacy, and the increasing prominence of data collection companies such as Google, Amazon, and soon, CAV operators. As in Carpenter, it should adjudicate with changing technology in mind, and seek to preserve Congress’ ability to legislate limits on the ability of private companies to sell our personal data.

City design has long been shaped by modes of transportation. The transition is easy to spot as you move westward across America. Relatively compact eastern cities initially grew up in the 18th and 19th centuries, when people traveled by foot or by horse. Scattered across the plains, and particularly throughout the vast expanses of Texas and the Southwest, are cities filled with wide thoroughfares and sprawling suburbs, designed to match the rise of car culture. A large-scale shift to autonomous vehicle transportation will once again mold our cities in new ways. I wrote recently about this coming shift, focusing in particular on the reuse of space currently dominated by parking. This post will build on that theme by exploring the ways in which big data generated by new transportation technologies will guide city planners and business strategists in creating new urban environments.

Many cities already take advantage of more traditional forms of transportation data to improve urban planning. For example, analysis of population density and traffic patterns facilitated Moscow’s 50% increase in public transit capacity, which enabled the city to reduce driving lanes in favor of more space for pedestrians and cyclists. Looking to the future, New York University’s Center for Urban Science and Progress seeks to help cities harness the power of big data to “become more productive and livable.” Today, more data exists regarding our transportation habits than ever before. Ride-hailing services such as Uber and Lyft, along with the popularity of “check-in” apps such as Foursquare, have exponentially increased the amount of data collected as we go through our daily routines. The advent of CAVs, along with smaller scale technologies such as bike-share and scooter-share programs, will only accelerate this trend.

Currently, most of this data is collected and held by private companies. This valuable information is already being aggregated and used by companies such as Sasaki, a design firm that uses data from Yelp, Google, and others to help businesses and developers understand how their planned projects can best fit in with a community’s existing living patterns. The information is able to help businesses understand, on a block-by-block basis, where their target market lives, shops, and travels. As companies such as Uber and Waymo roll out fleets of autonomous vehicles in the coming years that collect data on more and more people, such information will increasingly drive business planning.

Just as this wealth of data is impacting business decisions, making it available to the public sector would mark a significant upgrade in the capabilities of urban planners. To be sure, granting the government easy access to such fine-grained information about our daily lives comes with its own set of challenges, which my colleague Ian Williams has explored in a previous post. From the perspective of planning utility, however, the benefits are clear. By better understanding exactly what times and locations present the worst traffic challenges, cities can target infrastructure improvements, tollways, or carpool benefits to alleviate the problem. A more detailed understanding of which routes people take to and from home, work, shopping, and entertainment districts can allow for more efficient zoning and the development of more walkable neighborhoods. Such changes could make city centers more livable, guarding against the danger that CAVs will facilitate a new round of exurban flight.

As with previous shifts in transportation, the widespread move to CAVs expected in the coming years will be a key driver of the future shape of our cities. Urban planners and business strategists will play a featured role in determining whether this technology ushers in a new round of sprawl, or facilitates the growth and attractiveness of metropolitan centers. The intelligent and conscientious use of data generated by CAVs and other emerging technologies can help fuel smart development to ensure that our downtown spaces, and the communities they support, continue to thrive.


Two recent news stories build interestingly on my recent blog post about CAVs and privacy. The first, from Forbes, details law enforcement’s use of “reverse location” orders, whereby investigators can obtain from Google information on all Google users in a given location at a given time. This would allow, for example, police to obtain data on every Google account user within a mile of a gas station when it was robbed. Similar orders have been used to obtain data from Facebook and Snapchat.

Look forward a few years and it’s not hard to imagine similar orders being sent to the operators of CAVs, to obtain the data of untold numbers of users at the time of a crime. The problem here is that such orders can cast far too wide a net, allowing law enforcement access to the data of people completely uninvolved with the case being investigated. In one of the cases highlighted by Forbes, the area from which investigators requested data included not only the store that was robbed, but also nearby homes. The same situation could occur with CAVs, pulling in data on passengers wholly unconnected to the crime who happened to be driving nearby.
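To see how easily the geometry of such an order sweeps in bystanders, consider a minimal sketch of the query it effectively runs. Everything here—coordinates, names, timestamps—is invented for illustration:

```python
# Minimal sketch of a "reverse location" (geofence) query: return every
# user whose logged position falls within a radius of the scene during a
# time window. All records below are invented for illustration.

from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

records = [  # (user, latitude, longitude, timestamp)
    ("suspect",          42.2800, -83.7430, datetime(2019, 3, 1, 21, 5)),
    ("neighbor_at_home", 42.2810, -83.7450, datetime(2019, 3, 1, 21, 10)),
    ("passing_driver",   42.2790, -83.7410, datetime(2019, 3, 1, 21, 7)),
]

scene = (42.2803, -83.7432)
start, end = datetime(2019, 3, 1, 20, 30), datetime(2019, 3, 1, 21, 30)

matches = [
    user for user, lat, lon, ts in records
    if haversine_m(lat, lon, *scene) < 500 and start <= ts <= end
]
print(matches)  # all three match — two of them entirely uninvolved
```

The radius and the time window do all the work: widen either, and the order reaches ever more people with no connection to the crime.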

The other story comes from The Verge, which covers data mining GM conducted in Los Angeles and Chicago in 2017. From the article:

GM captured minute details such as station selection, volume level, and ZIP codes of vehicle owners, and then used the car’s built-in Wi-Fi signal to upload the data to its servers. The goal was to determine the relationship between what drivers listen to and what they buy and then turn around and sell the data to advertisers and radio operators. And it got really specific: GM tracked a driver listening to country music who stopped at a Tim Horton’s restaurant. (No data on that donut order, though.)

That’s an awful lot of information on a person’s daily habits. While many people have become accustomed (or perhaps numb) to the collection of their data online, one wonders how many have given thought to the data collected by their vehicle. The article also points out the scale of the data collected by connected cars and what it could be worth on the market:

According to research firm McKinsey, connected cars create up to 600GB of data per day — the equivalent of more than 100 hours of HD video every 60 minutes — and self-driving cars are expected to generate more than 150 times that amount. The value of this data is expected to reach more than $1.5 trillion by the year 2030, McKinsey says.
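Taking McKinsey’s figures at face value, the back-of-the-envelope arithmetic for a single self-driving car is striking:

```python
# Back-of-the-envelope arithmetic using the McKinsey figures quoted above.
connected_gb_per_day = 600   # connected car, upper estimate
autonomous_multiple = 150    # "more than 150 times that amount"

autonomous_gb_per_day = connected_gb_per_day * autonomous_multiple
print(f"{autonomous_gb_per_day:,} GB/day "
      f"≈ {autonomous_gb_per_day / 1_000:.0f} TB per vehicle per day")
# -> 90,000 GB/day ≈ 90 TB per vehicle per day
```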

Obviously, creators and operators of CAVs are going to want to tap into the market for data. But given the push for privacy legislation I highlighted in my last post, they may soon have to contend with limits on just what they can collect.

~ P.S. I can’t resist adding a brief note on some research from my undergraduate alma mater, the University of Illinois. It seems some researchers there are taking inspiration from the eyes of mantis shrimp to improve the capability of CAV cameras.