Cybersecurity

Last week I discussed the California Superior Court decision that ruled that, under California law, Uber and Lyft must classify their ridesharing drivers as employees rather than independent contractors. In response to that ruling, both companies had threatened to shut down service across the state. Yesterday, an appeals court issued a stay on that ruling, allowing both companies to continue operations “pending resolution” of their appeal of the initial order. As I mentioned in my last blog post, the rideshare giants’ strategy currently appears to be to “run out the clock” until the November election, when California voters will decide on Proposition 22, which would establish a new classification for drivers. So for now, those Californians who are willing to brave getting into a rideshare will be able to do so – while Uber and Lyft also explore more creative solutions, in case Prop 22 doesn’t pass.

Also on Thursday, another court case tied to Uber was just getting started. Federal prosecutors in San Francisco filed criminal charges against Uber’s former security chief, Joe Sullivan. Sullivan is charged with two felony counts for failing to disclose a 2016 Uber data breach to federal investigators who were probing a similar incident from 2014. In the 2016 incident, an outside hacker was paid $100,000 by Uber after revealing that they had acquired access to the information of 57 million riders and drivers. Beyond the payment, Uber faced further criticism for failing to reveal the incident for a full year. Two of the hackers involved later pleaded guilty to charges related to the hack, and both are awaiting federal sentencing. In 2018 Uber paid $148 million to settle a suit brought by state attorneys general related to the hack, while the FTC expanded a previous data breach settlement in reaction to the incident.

Beyond the lack of transparency (to the public and law enforcement), Uber’s major misstep, at least in my view, is the payment itself. While many companies, Uber included, sponsor “bug bounties,” where outside security researchers are rewarded for reporting security flaws in a company’s products, this payment fell outside of that structure. Rather, it looks more like a ransom payment to less-than-scrupulous hackers. While Uber is far from the only company to have faced data breaches (or to have paid off hackers), this case should be a wake-up call for all mobility companies – a reminder that they have to be very careful with the customer data they collect, lest they fall prey to a data breach, and, just as importantly, that when a breach occurs, they have to face it with transparency, both to the public and to investigators.

The third Uber-related case this month involves another former Uber employee, Anthony Levandowski, who was sentenced to 18 months in prison for stealing automated vehicle trade secrets from Google. In 2016, Levandowski left Google’s automated vehicle project to start his own AV tech company, which was in turn acquired by Uber. Levandowski was accused of downloading thousands of Google files related to AVs before he left, leading to a suit between Google’s Waymo and Uber that was settled for roughly $250 million. There are many more details to the case, but it highlights some of the many challenges Uber, and the mobility industry at large, face.

Mobility and AVs are a huge business, with a lot of pressure to deliver products and receive high valuations from investors and IPOs. That can incentivize misbehavior, whether it be stealing intellectual property or concealing data breaches. Given how central mobility technologies are to people’s daily lives, the public deserves to be able to trust the companies developing and deploying those technologies – something undermined by cases like these.

Over the last few years, emerging mobility technologies from CAVs to e-scooters have become the targets of malicious hackers. CAVs, for example, are complicated machines with many different components, which opens up many avenues for attack. Hackers can reprogram key fobs and keyless ignition systems. Fleet management software used worldwide can be used to kill vehicle engines. CAV systems can be confused by things as simple as a sticker on a stop sign. Even the diagnostic systems within a vehicle, which are required to be accessible, can be weaponized against a vehicle by way of a $10 piece of tech.
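Part of what makes the diagnostic port such an attractive target is how legible in-vehicle traffic is once you are on the bus. As a minimal sketch (in Python, using a fabricated frame and a hypothetical ID), here is how a cheap dongle-style tool can decode a SocketCAN-layout CAN frame into its arbitration ID and payload:

```python
import struct

def decode_can_frame(raw: bytes) -> dict:
    """Decode a 16-byte SocketCAN-style frame: 4-byte little-endian
    arbitration ID, 1-byte data length code, 3 pad bytes, 8 data bytes."""
    can_id, dlc = struct.unpack_from("<IB", raw, 0)
    data = raw[8:8 + dlc]
    return {"id": can_id & 0x1FFFFFFF, "dlc": dlc, "data": data.hex()}

# Fabricated example frame; the ID 0x244 is a stand-in, not a real mapping.
frame = struct.pack("<IB3x8s", 0x244, 8, bytes.fromhex("0000000012340000"))
print(decode_can_frame(frame))
```

Classic CAN has no built-in authentication or encryption, which is why anything that can write to the bus – including a compromised diagnostic dongle – can inject frames the rest of the vehicle will treat as legitimate.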

For mobility-as-a-service (“MaaS”) companies, the security of their networks and user accounts is also under threat. In 2015 a number of Uber accounts were found for sale on the “dark web,” and this year a similar market for Lime scooter accounts popped up. Hacking is not even required in some cases. Car2Go paused service in Chicago after 100 vehicles were stolen by people exploiting the company’s app (the company is now ending service in the city, though they say it’s for business reasons).

The wireless systems used for vehicle connectivity are also a target. One faction in the current battle over radio spectrum is pushing cellular technology, especially 5G, as the future of vehicle-to-vehicle communication. While 5G is more secure than older wireless networks, it is not widespread in the U.S., leaving vulnerabilities. As some companies push for “over-the-air” updates, where vehicle software is wirelessly updated, insecure wireless networks could lead to serious vehicle safety issues.

So what can be done to deal with these cybersecurity threats? For a start, there are standard-setting discussions underway, and there have been proposals for the government to step up cybersecurity regulation for vehicles. A California bill on the security of the “internet-of-things” could also influence vehicle security. Auto suppliers are building cybersecurity into their development processes. Government researchers, like those at Argonne National Laboratory outside Chicago, are looking for vulnerabilities up and down the supply chain, including threats involving public car chargers. Given the ever-changing nature of cybersecurity threats, the real solution is “all of the above.” Laws and regulations can spark efforts, but they’ll likely never be able to keep up with evolving threats, meaning companies and researchers will always have to be watchful.

P.S. – Here is a good example of how cybersecurity threats are always changing. In 2018, security researchers were able to hack into a smartphone’s microphone and use it to steal users’ passwords, using the acoustic signature of the password. In other words, they could figure out your password by listening to you type it in.
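The core idea behind that attack is simpler than it sounds. As a toy sketch (with entirely fabricated “acoustic feature” vectors – a real attack would learn per-key profiles from recorded training audio), if each key press produces a distinguishable signature, a plain nearest-neighbor match can recover the typed sequence:

```python
import math

# Fabricated per-key acoustic profiles (e.g., energy in frequency bands).
KEY_PROFILES = {
    "p": (0.9, 0.1, 0.3),
    "a": (0.2, 0.8, 0.4),
    "s": (0.5, 0.5, 0.9),
}

def nearest_key(observed) -> str:
    """Return the key whose profile is closest (Euclidean) to the observation."""
    return min(KEY_PROFILES, key=lambda k: math.dist(KEY_PROFILES[k], observed))

# Simulated noisy observations of someone typing "pass"
observations = [(0.85, 0.15, 0.25), (0.25, 0.75, 0.45),
                (0.55, 0.45, 0.85), (0.45, 0.55, 0.95)]
recovered = "".join(nearest_key(o) for o in observations)
print(recovered)  # → "pass"
```

The published attacks are far more sophisticated – they work from raw audio and cope with noise – but the classification step is conceptually this: match each keystroke’s sound against learned signatures.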

On April 8, 2019, it was announced at the 35th Space Symposium in Colorado Springs, Colorado that the space industry was getting an Information Sharing and Analysis Center (ISAC). Kratos Defense & Security Solutions, “as a service to the industry and with the support of the U.S. Government,” was the first founding member of the Space-ISAC (S-ISAC).

“[ISACs] help critical infrastructure owners and operators protect their facilities, personnel and customers from cyber and physical security threats and other hazards. ISACs collect, analyze and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency.”

National Council of ISACs

ISACs, first introduced in Presidential Decision Directive-63 (PDD-63) in 1998, were intended to be one aspect of the United States’ development of “measures to swiftly eliminate any significant vulnerability to both physical and cyber attacks on our critical infrastructures, including especially our cyber systems.” PDD-63 requested “each critical infrastructure sector to establish sector-specific organizations to share information about threats and vulnerabilities.” In 2003, Homeland Security Presidential Directive 7 (HSPD-7) reaffirmed the relationship between the public and private sectors of critical infrastructure in the development of ISACs.

Today, there are ISACs in place for a number of subsectors within the sixteen critical infrastructure sectors, for specific geographic regions, and for different levels of government.

However, the S-ISAC, while undoubtedly a good call, has left me with a few questions.

Why so much government involvement?

From what I’ve read, the Federal government’s role is to “collaborate with appropriate private sector entities and continue to encourage the development of information sharing and analysis mechanisms.” For example, the Aviation-ISAC (A-ISAC) was formed when “[t]here was consensus that the community needed an Aviation ISAC”; the Automotive-ISAC (Auto-ISAC) came into being when “[fourteen] light-duty vehicle [Original Equipment Manufacturers] decided to come together to charter the formation of Auto-ISAC”; and the Information Technology-ISAC (IT-ISAC) “was established by leading Information Technology Companies in 2000.”

Reportedly, it was not the private actors within the space industry that recognized or felt the need for the S-ISAC, but an interagency body designed to keep an eye on and occasionally guide or direct efforts across space agencies. The Science and Technology Partnership Forum has three principal partner agencies: U.S. Air Force (USAF) Space Command, the National Aeronautics and Space Administration (NASA), and the National Reconnaissance Office (NRO).

Additionally, it appears as though Kratos, a contractor for the Department of Defense and other agencies, was the only private actor involved in the development and formation of the S-ISAC.

These are just some things to keep in mind. The S-ISAC’s perhaps unique characteristics must be considered in light of the clear national security and defense interests that these agencies and others have in the information sharing mechanism. Also, since the announcement of the S-ISAC, Kratos has been joined by Booz Allen Hamilton, Mitre Corporation, Lockheed Martin, and SES as founding members.

Why an ISAC?

Again, ISACs are typically the domain of the private owners, operators, and actors within an industry or sector. As new vulnerabilities and threats to the United States’ space activities have rapidly emerged in recent years, it would seem to make sense for the Federal government to push for the development of an Information Sharing and Analysis Organization (ISAO). ISAOs, formed in response to Executive Order 13691 (EO 13691) in 2015, are designed to enable private companies and federal agencies “to share information related to cybersecurity risks and incidents and collaborate to respond in as close to real time as possible.”

While ISAOs and ISACs share the same goals, there appear to be a number of differences between the two information-sharing mechanisms. ISACs are often funded by membership fees, which individual members are responsible for and which can be high enough to block smaller organizations or new actors from joining; ISAOs, by contrast, can draw on grants from the Department of Homeland Security (DHS) to fund their establishment and continued operation. ISACs – for example, the A-ISAC – also seem to monitor and control the flow of member-provided information available to the Federal government more closely than ISAOs do.

Also, ISACs – such as those recognized by the National Council of ISACs (NCI) – are typically limited to sectors that have been designated as Critical Infrastructure and their associated sub-sectors. Despite obvious reasons why it should be, space has not been recognized as a critical infrastructure sector.

For now, this seems like a good place to end. This introductory look into ISACs generally and the S-ISAC has left me with many questions about the organization itself and its developing relationship with the private space industry as a whole. Hopefully, these questions and more will be answered in the coming days as the S-ISAC and the private space industry continue to develop and grow. 

Here are some of my unaddressed questions to consider while exploring the new S-ISAC: Why develop the S-ISAC now? What types of companies are welcome to become members – only defense contractors, or also, for example, commercial satellite constellation companies and small launch providers? As the commercial space industry continues to grow in areas such as space tourism, will the S-ISAC welcome these actors as well, or will we see the establishment of a nearly identical organization with a different name?

I previously blogged on automated emergency braking (AEB) standardization taking place at the World Forum for Harmonization of Vehicle Regulations (also known as WP.29), a UN working group tasked with managing several international conventions on the topic, including the 1958 Agreement on wheeled vehicle standards.

It turns out the World Forum recently published the result of a joint effort undertaken by the EU, US, China, and Japan regarding AV safety. Titled Revised Framework document on automated/autonomous vehicles, the document aims to “provide guidance” regarding “key principles” of AV safety, in addition to setting the agenda for the various subcommittees of the Forum.

One may first wonder what China and the US are doing there, as they are not parties to the 1958 Agreement. It turns out that participation in the World Forum is open to everyone (at the UN), regardless of membership in the Agreement. China and the US are thus given the opportunity to influence the adoption of one standard over another through participation in the Forum and its sub-working groups, without being bound if the outcome is not to their liking in the end. Peachy!

International lawyers know that every word counts, and that every word has been negotiated down to the comma – or so it is safe to assume. Using that kind of close textual analysis, what stands out in this otherwise terse UN prose? First, the only sentence couched in mandatory terms. Setting out the drafters’ “safety vision,” it goes as follows: AVs “shall not cause any non-tolerable risk, meaning . . . shall not cause any traffic accidents resulting in injury or death that are reasonably foreseeable and preventable.”

This sets the bar very high in terms of AV behavioral standard, markedly higher than for human drivers. We cause plenty of accidents that would be “reasonably foreseeable and preventable.” A large share of accidents is probably the result of human error, distraction, or recklessness – all things “foreseeable” and “preventable.” Nevertheless, we are allowed to drive and are insurable (except in the most egregious cases…). Whether this is a good standard for AVs can be discussed, but what is certain is that it reflects the general idea that we as humans hold machines to a much higher “standard of behavior” than other humans; we forgive other humans for their mistakes, but machines ought to be perfect – or almost so.

In second position: AVs “should ensure compliance with road traffic regulations.” This is striking in its simplicity, and I suppose that the whole discussion on how the law and its enforcement are actually rather flexible (such as the kind of discussion this very journal hosted last year in Ann Arbor) has not reached Geneva yet. As can be seen in the report on that conference, one cannot just ask AVs to “comply” with the law; there is much more to it.

In third position: AVs “should allow interaction with the other road users (e.g. by means of external human machine interface on operational status of the vehicle, etc.)” Hold on! It turns out this was a topic at last year’s Problem-Solving Initiative hosted by the University of Michigan Law School, and we concluded that this was actually a bad idea. Why? First, people need to understand whatever “message” is sent by such an interface. Language may get in the way. Then, the word “interaction” suggests some form of control by the other road user. Think of a hand signal to get the right of way from an AV; living in a college town, it is not difficult to imagine how such “responsive” AVs could wreak havoc in areas with plenty of “other road users,” on their feet or zipping around on scooters… Our conclusion was that the AV could send simple light signals to indicate its systems have “noticed” a crossing pedestrian, for example, without any additional control mechanism being given to the pedestrian. Obviously, jaywalking in front of an AV would still result in the AV braking… and maybe sending angry light signals or honking, just like a human driver would.

Finally: cybersecurity and system updates. Oof! The cybersecurity issues of IoT devices are an evergreen source of memes and mockery – windows onto a quirky dystopian future where software updates (or the lack thereof) would prevent one from turning the lights on, flushing the toilet, or getting out of the house… or where a botnet of connected wine bottles sends DDoS attacks across the web’s vast expanse. What about a software update while merging onto a crowded highway from an entry ramp? In that regard, the language of those sections seems rather meek, simply citing the need to respect “established” cybersecurity “best practices” and to ensure system updates happen “in a safe and secured way…” I don’t know what cybersecurity best practices are, but looking at the constant stream of IT industry leaders caught in various cybersecurity scandals, I have some doubts. If there is one area where actual standards are badly needed, it is consumer-facing connected objects.
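One widely cited “best practice” for safe updates is at least concrete: never apply an image whose authenticity cannot be verified. A minimal sketch in Python – deliberately simplified, since real OTA pipelines use asymmetric signatures (e.g., Ed25519) so the vehicle holds only a public key; an HMAC with a shared secret stands in here to keep the example standard-library-only, and the key and firmware blob are fabricated:

```python
import hashlib
import hmac

VENDOR_KEY = b"demo-shared-secret"  # hypothetical key, for illustration only

def sign_update(firmware: bytes) -> bytes:
    """Vendor side: produce a tag binding the vendor key to the image."""
    return hmac.new(VENDOR_KEY, firmware, hashlib.sha256).digest()

def verify_and_apply(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: refuse any image whose tag does not verify."""
    if not hmac.compare_digest(sign_update(firmware), tag):
        return False  # reject: tampered or unsigned update
    # ...only now stage the image, keeping the old one for rollback...
    return True

image = b"\x7fELF...firmware-v2"  # stand-in firmware blob
good_tag = sign_update(image)
print(verify_and_apply(image, good_tag))         # True
print(verify_and_apply(image + b"!", good_tag))  # False
```

Even this toy version captures the two points the UN language glosses over: verification must happen before installation, and a failed check needs a safe fallback (here, keeping the old image) rather than leaving the vehicle in a half-updated state.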

All in all, is this just yet another useless piece of paper produced by an equally useless international organization? If one is looking for raw power, probably. But there is more to it: the interest of such a document is that it reflects the lowest common denominator among countries with diverging interests. The fact that they agree on something (or maybe nothing) can be a vital piece of information. If I were an OEM or a policy maker, it is certainly something I would be monitoring with due care.