November 2019

On November 19, the NTSB held a public board meeting on the 2018 Uber accident in Tempe, Arizona, involving an “automated” (actually level 3) Uber-operated Volvo SUV. Elaine Herzberg, a pedestrian, died in the accident. In the wake of the report, now is a good time to come back to level 3 cars and the question of “safety drivers.”

Given that the purpose of the meeting was to put the blame on someone, media outlets were quick to pick a culprit for their headlines: the “safety driver” who kept looking at her phone? The sensors that detected all kinds of things but never a person? Uber, which deactivated the OEM’s emergency braking? Or maybe Uber’s “safety culture”? A whole industry’s?

The Board actually blames all of them, steering clear of singling out one event or actor. That is probably the safest and most reasonable course of action for a regulator, and it has relevant implications for how law enforcement will handle accidents involving AVs in the future. But because we are human, we tend to latch onto the human part of the story: that of the safety driver.

She was allegedly looking at her phone, “watching TV” as one article put it, following the latest episode of The Voice. The Board determined that she looked at the road one second before the impact. That is short, but under more normal circumstances, enough to slam on the brakes. Maybe her foot was far from the pedal; maybe she just did not react because she was not in an “aware” state of mind (“automation complacency,” the report calls it). In any case, it was her job to watch the road, and she was violating Uber’s policy by using her phone while working as a safety driver.

At the time of the accident, the Tempe police released dash-cam footage of the few seconds leading up to the impact, showing a poorly lit street. The relevance of this footage was then disputed in an Ars Technica article, which aims to demonstrate that the street is actually well lit and that the car’s headlights alone should have made the victim visible in time. Yet I think it is too easy to put the blame on the safety driver. She was not doing her job, but what kind of job was it? Humans drive reasonably well, but that’s when we’re actually driving, not sitting in the driver’s seat with nothing to do but wait for something to jump out of the roadside. Even if she had been paying attention, injury was reasonably foreseeable. And even if she had been driving in broad daylight, there remains a more fundamental problem beyond safety driver distraction.

“The [NTSB] also found that Uber’s autonomous vehicles were not properly programmed to react to pedestrians crossing the street outside of designated crosswalks,” one article writes. I find that finding somewhat more appalling than that of a distracted safety driver. Call that human bias; still, I do not expect machines to be perfect. But what this tells us is that stricter monitoring of safety drivers’ cellphone use will not cut it either, if the sensors keep failing. The sensors need to be able to handle this kind of situation. A car whose sensors cannot recognize a slowly crossing pedestrian (anywhere, even in the middle of the highway) has no place on a 45-mph road, period.

If there is one thing this accident has shown, it is that “safety drivers” add little to the safety of AVs. It’s a coin flip: in some cases the reactivity and skill of the driver make up for the sensor failure; in others, a distracted, “complacent” driver (for any reason, phone or otherwise) does not. It is safe to say that the overall effect on safety is at best neutral. Even worse, it may provide a false sense of safety to the operator, as it apparently did here. This, in turn, prompts us to rethink level 3 altogether.

While Uber has stated that it has “significantly improved its safety culture” since the accident, the question of the overall safety of these level 3 cars remains. And beyond everything Uber can do, one may wonder whether such accidents are bound to repeat themselves should level 3 cars see mass commercial deployment. Humans are not reliable “safety drivers.” And in a scenario that involves such drivers, it takes much less than the deadly laundry list of failures we had here for such an accident to happen. Being complacent may also mean that your foot is not close to the pedals, or that your hands are not “hovering above the steering wheel” as they (apparently) should be. The extra half second it takes to slam on the brakes or grip the wheel is enough to turn serious injury into death.

The paramount error here was to rely on a human, a person Uber should have known would be distracted or less responsive than an average driver, as the final safeguard against sensor failure. Not long ago, many industry players were concerned about early standardization. Now that some companies are out there, moving fast and literally breaking people (not even things, mind you!), the time has come to seriously discuss safety and testing standards at the US federal and, why not, international level.

A University of Michigan Law School Problem Solving Initiative class on AV standardization will take place during the Winter semester of 2020, with deliverables in April. Stay tuned!

An important development in the artificial intelligence space occurred last month when the Pentagon’s Defense Innovation Board released draft recommendations [PDF] on the ethical use of AI by the Department of Defense. The recommendations, if adopted, are expected to “help guide, inform, and inculcate the ethical and responsible use of AI – in both combat and non-combat environments.”

For better or for worse, a predominant debate around the development of autonomous systems today revolves around ethics. By definition, autonomous systems are predicated on self-learning and reduced human involvement. As Andrew Moore, head of Google Cloud AI and former dean of computer science at Carnegie Mellon University, defines it, artificial intelligence is just “the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

How then do makers of these systems ensure that the human values that guide everyday interactions are replicated in decisions that machines make? The answer, the argument goes, lies in coding ethical principles that have been tested for centuries into otherwise “ethically blind” machines.

Critics of this argument posit that the recent trend of researching and codifying ethical guidelines is just one way for tech companies to avoid government regulation. Major companies like Google, Facebook, and Amazon have all either adopted AI charters or established committees to define ethical principles. Whether these approaches are useful is still open to debate. One study, for example, found that priming software developers with ethical codes of conduct had “no observed effect” [PDF] on their decision making. Does this then mean that the whole conversation around AI and ethics is moot? Perhaps not.

In the study and development of autonomous systems, the content of ethical guidelines is only as important as the institution adopting them. The primary reason ethical principles adopted by tech companies are met with cynicism is that they are voluntary and do not in and of themselves ensure implementation in practice. On the other hand, when similar principles are adopted by institutions that treat the prescribed codes as red lines and have the legal authority to enforce them, these ethical guidelines become massively important documents.

The Pentagon’s recommendations – essentially five high-level principles – must be lauded for moving the conversation in the right direction. The draft document establishes that AI systems developed and deployed by the DoD must be responsible, equitable, traceable, reliable, and governable. Of special note among these are the calls to make AI traceable and governable. Traceability in this context refers to the ability of a technician to reverse engineer the decision-making process of an autonomous system and glean how it arrived at the conclusion that it did. The report calls this “auditable methodologies, data sources, and design procedure and documentation.” Governability similarly requires systems to be developed with the ability to “disengage or deactivate deployed systems that demonstrate escalatory or other behavior.”

Both of these aspects are frequently the most overlooked in conversations around autonomous systems, yet they are critical for ensuring reliability. They are also likely to be the most contested, as questions of accountability arise when machines malfunction, as they are bound to do. They are likewise likely to make “decision made by algorithm” a less viable defense when creators of AI are confronted with questions of bias and discrimination – as Apple and Goldman Sachs’ credit limit-assigning algorithm recently was.

While the most direct application of the DoD’s principles is in the context of lethal autonomous weapon systems, their relevance will likely be felt far and wide. The private technology companies currently soliciting and building autonomous systems for military use – such as Microsoft, with its $10 billion JEDI contract to overhaul the military’s cloud computing infrastructure, and Amazon, with its facial recognition system used by law enforcement – will likely have to invest in building new fail-safes into their systems to comply with the DoD’s recommendations. These efforts will likely bleed through into systems being developed for civilian use as well. The DoD is certainly not the first institution to adopt such principles. Non-governmental bodies such as the Institute of Electrical and Electronics Engineers (IEEE) – the largest technical professional organization in the world – have also called [PDF] for the adoption of standards around transparency and accountability in AI to provide “an unambiguous rationale” for all decisions taken. While the specific questions around which ethical principles can be applied to machine learning will continue for the foreseeable future, the Pentagon’s draft could play a key role in moving the needle forward.

Developments in technology have led to an increased reliance on artificial intelligence and autonomy in vehicles such as cars, planes, helicopters, and trains. The latest vehicles to incorporate autonomous technology into their operations are shipping vessels. Autonomous ships will transform the industry, and current regulations are being reassessed to determine how best to accommodate this new way of shipping.

The shipping industry is regulated on a global level and remains one of the most heavily regulated industries today. International shipping is principally governed by the International Maritime Organization (IMO), a United Nations agency responsible for the safety of life at sea and the protection of the marine environment. The IMO has developed a comprehensive framework of global maritime safety regulations adopted from international conventions. In order to be proactive, IMO initiated a regulatory scoping exercise on Maritime Autonomous Surface Ships (MASS). The scoping exercise is led by IMO’s Maritime Safety Committee and is expected to be completed by 2020. Its goal is to determine how autonomous ships may be incorporated into the regulations, touching on issues such as safety, security, liability, the marine environment, and the human element.

In order to assess the scope of differing levels of autonomous ships, IMO defined four degrees of autonomy. The lowest degree of autonomy involves automated processes that can control the ship at times. Seafarers will remain in charge of operating and controlling the ship when the automated system is not activated. The second degree is a remotely controlled ship with seafarers still on board. The ship will be controlled from another location but the seafarers on board will be able to take control if necessary. The next degree is a remotely controlled ship without any seafarers on board. Lastly, the highest degree of autonomy is a fully autonomous, unmanned ship that is equipped with the ability to make decisions and take action by itself.

Several companies have already begun building autonomous capabilities into their ships, and the technology is rapidly developing. While the scoping exercise is underway, the Maritime Safety Committee approved interim guidelines for trials on existing and emerging autonomous ships. The trials should be generic and goal-based and take a precautionary approach to ensure the operations are safe, secure, and environmentally sound. In 2018, Rolls-Royce conducted its first test of an autonomous ferry, named Falco. To demonstrate two degrees of autonomy, the ferry operated fully autonomously on its outward voyage and then switched to remotely controlled operation on its return to port. The operator, working from a command center 30 miles away, successfully took over the ship and guided it to the dock.

Autonomous ships are expected to improve safety, reduce operating costs, increase efficiency, and minimize the effects of shipping on the environment. An increased reliance on autonomy will reduce the chance of human error, thereby improving safety. Human error accounts for 75-96% of marine accidents and accounted for $1.6 billion in losses between 2011 and 2016. Operational costs are also expected to decrease, as there will be little to no crew on board. Crew costs can constitute up to 42% of a ship’s operating costs, and if there is no crew, accommodations such as living quarters, air conditioning, and cooking facilities can be eliminated. Further, a ship free from crew accommodations and seafarers can be redesigned for greater carrying capacity, making voyages more efficient. Lastly, autonomous ships may prove to be better for the environment than current vessels: they are expected to operate with alternative fuel sources, zero-emissions technologies, and no ballast.

As we have seen in other transportation industries, regulation for autonomous vehicles falls far behind the technological innovation. By taking a proactive approach in the case of autonomous shipping, IMO may be ready to create regulations that better reflect the future of shipping within the next decade. 

Last time, I wrote about platooning and the potential economic savings that could benefit the commercial trucking sector if heavy-duty trucks were to implement the technology. This week, I’m writing about one of the current barriers to implementing platooning, both commercially and in the larger scheme of highway driving.

One of the most readily identifiable barriers to the widespread implementation of truck platooning is the ‘Following Too Close’ (“FTC”) laws enforced by almost every state. There is currently a patchwork of state legislation which prevents vehicles from following too closely behind another vehicle. Violating these laws is negligence per se.

For those who don’t quite remember 1L torts, negligence per se essentially means “if you violate this statute, that proves an element of negligence.” Therefore, if one vehicle is following too closely behind another vehicle in violation of an FTC statute, that satisfies the breach element of negligence and is likely enough to be fined for negligent driving.

These laws are typically meant to prevent vehicles from following dangerously close or tailgating other vehicles. The state laws that regulate this conduct can be divided into roughly four categories. Some states prescribe the distance or time a driver must remain behind the vehicle in front of them; others impose a more subjective standard. The subjective standards are far more common than the objective standards.

Subjective Categories

  • “Reasonable and Prudent” requires enough space between vehicles for a safe stop in case of an emergency. This FTC rule is the most common for cars and seems to be a mere codification of common-law rules of ordinary care.
  • “Sufficient space to enter and occupy without danger” requires trucks and vehicles with trailers to leave enough space that another vehicle may “enter and occupy such space without danger.” This is the most common rule for trucks.

Objective Categories

  • Distance-Based: Some states prescribe the distance at which a vehicle may follow another vehicle; others identify a proportionate interval based on distance and speed. These are the most common rules for heavy trucks and frequently set the minimum distance between 300 and 500 feet.
  • Time-Based: Timing is the least common FTC rule, but the two jurisdictions that impose it require drivers to travel “at least two seconds behind the vehicle being followed.”

It is easy to see how, given the close distance at which vehicles need to follow to benefit from platooning, any of these laws would on their face prohibit platooning within their borders. However, several states have already enacted legislation which exempts the trailing truck in a platoon from their “Following Too Close” laws. As of April 2019, 15 states had enacted legislation to that effect. Additional states have passed legislation to allow platoon testing or pilot programs within their states.

However, despite these enactments, a non-uniform regulatory scheme does not provide the level of certainty needed to incentivize investment in platooning technology. Uncertain state regulation can deter interstate carriers from investing in platooning and could lead to a system where platooning trucks operate only within a single state’s boundaries.

Although the exemptions are a step in the right direction, non-uniformity will likely result in a lower overall platooning usage rate, limiting the widespread fuel efficiency and safety benefits derived when platooning is implemented on a large, interstate scale. Without uniform legislation that allows platooning to operate consistently across all states, the need for different systems will hinder the technology’s development and the rate at which trucking companies adopt it.

However, even if not all states pass legislation exempting platooning vehicles from their FTC laws, there could be a way around the subjective standards. The most common subjective law, “Reasonable and Prudent,” requires only enough space for the vehicles to stop safely in case of an emergency. For a human driver, this distance is likely dozens of feet, given the speed at which cars travel on the interstate. Recall from last week, however, that platooning vehicles are synchronized in their acceleration, deceleration, and braking.

If the vehicles travel in tandem and brake at the same time and rate, any distance greater than several feet could be considered “reasonable and prudent.” Perhaps what needs to be developed is a “reasonable platooning vehicle” standard, rather than a “reasonable driver” standard, when it comes to autonomous vehicle technology. Then again, considering the ever-looming potential for technological failure, it could be argued that following that closely behind another heavy vehicle is never reasonable and prudent, once again requiring an exemption rather than an interpretive legal argument for a new “reasonableness” standard.
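
For a rough sense of the magnitudes behind that argument, consider how much following distance is consumed by reaction delay alone. The sketch below is a back-of-the-envelope illustration only: the 65 mph travel speed, the 1.5-second human perception-reaction time, and the 0.1-second V2V actuation delay are assumed figures for illustration, not values drawn from any statute, test program, or vendor specification.

```python
# Back-of-the-envelope sketch: distance traveled before braking even begins,
# for a human driver versus a V2V-coupled platooning truck.
# All inputs are illustrative assumptions, not figures from the sources above.

FT_PER_S_PER_MPH = 5280 / 3600  # 1 mph expressed in feet per second

def reaction_gap_feet(speed_mph: float, delay_s: float) -> float:
    """Feet covered during the delay between a hazard and the start of braking."""
    return speed_mph * FT_PER_S_PER_MPH * delay_s

speed_mph = 65          # assumed interstate travel speed
human_delay_s = 1.5     # commonly cited perception-reaction time (assumption)
v2v_delay_s = 0.1       # assumed end-to-end V2V actuation latency

print(f"Human driver: ~{reaction_gap_feet(speed_mph, human_delay_s):.0f} ft before braking starts")
print(f"V2V platoon:  ~{reaction_gap_feet(speed_mph, v2v_delay_s):.0f} ft before braking starts")
```

On those assumptions, a human driver covers roughly 140 feet before the brakes are even applied, while a synchronized platoon gives up only about 10 feet, which is why a 25- to 75-foot platooning gap is not obviously unreasonable in the way a 25-foot human-driven gap would be.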

Either way, to ensure certainty for businesses, more states should exempt platooning vehicles from their “Following Too Close” laws. Otherwise, the technology may never achieve a scale that makes it worth the early investment.

On April 8, 2019, it was announced at the 35th Space Symposium in Colorado Springs, Colorado that the space industry was getting an Information Sharing and Analysis Center (ISAC). Kratos Defense & Security Solutions, “as a service to the industry and with the support of the U.S. Government,” was the first founding member of the Space-ISAC (S-ISAC).

“[ISACs] help critical infrastructure owners and operators protect their facilities, personnel and customers from cyber and physical security threats and other hazards. ISACs collect, analyze and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency.”

National Council of ISACs

ISACs, first introduced in Presidential Decision Directive-63 (PDD-63) in 1998, were intended to be one aspect of the United States’ development of “measures to swiftly eliminate any significant vulnerability to both physical and cyber attacks on our critical infrastructures, including especially our cyber systems.” PDD-63 requested “each critical infrastructure sector to establish sector-specific organizations to share information about threats and vulnerabilities.” In 2003, Homeland Security Presidential Directive 7 (HSPD-7) reaffirmed the relationship between the public and private sectors of critical infrastructure in the development of ISACs.

Today, there are ISACs in place for a number of subsectors within the sixteen critical infrastructure sectors, for specific geographic regions, and for different levels of government.

However, the S-ISAC, while undoubtedly a good call, has left me with a few questions.

Why so much government involvement?

From what I’ve read, the Federal government’s role is to “collaborate with appropriate private sector entities and continue to encourage the development of information sharing and analysis mechanisms.” For example, the Aviation-ISAC (A-ISAC) was formed when “[t]here was consensus that the community needed an Aviation ISAC”; the Automotive-ISAC (Auto-ISAC) came into being when “[fourteen] light-duty vehicle [Original Equipment Manufacturers] decided to come together to charter the formation of Auto-ISAC”; and the Information Technology-ISAC (IT-ISAC) “was established by leading Information Technology Companies in 2000.”

Reportedly, it was not the private actors within the space industry that recognized or felt the need for the S-ISAC, but the Science and Technology Partnership Forum, an interagency body designed to keep an eye on and occasionally guide or direct efforts across space agencies. The Forum has three principal partner agencies: U.S. Air Force (USAF) Space Command, the National Aeronautics and Space Administration (NASA), and the National Reconnaissance Office (NRO).

Additionally, it appears as though Kratos, a contractor for the Department of Defense and other agencies, was the only private actor involved in the development and formation of the S-ISAC.

These are just some things to keep in mind. The S-ISAC’s perhaps unique characteristics must be considered in light of the clear national security and defense interests that these agencies and others have in the information-sharing mechanism. Also, since the announcement of the S-ISAC, Kratos has been joined by Booz Allen Hamilton, Mitre Corporation, Lockheed Martin, and SES as founding members.

Why an ISAC?

Again, ISACs are typically the domain of the private owners, operators, and actors within an industry or sector. Given the vulnerabilities and threats to the United States’ space activities that have rapidly manifested in recent years and continue to emerge today, it would seem to make sense for the Federal government to push for the development of an Information Sharing and Analysis Organization (ISAO). ISAOs, formed in response to Executive Order 13691 (EO 13691) in 2015, are designed to enable private companies and federal agencies “to share information related to cybersecurity risks and incidents and collaborate to respond in as close to real time as possible.”

While ISAOs and ISACs share the same goals, there appear to be a number of differences between the two information-sharing mechanisms. ISACs are often funded by membership fees, which individual members are responsible for and which can be high enough to block smaller organizations or new actors from joining; ISAOs, by contrast, can draw on grants from the Department of Homeland Security (DHS) for their establishment and continued operation. ISACs – for example, the A-ISAC – also seem to monitor and control the flow of member-provided information available to the Federal government more closely than ISAOs do.

Also, ISACs – such as those recognized by the National Council of ISACs (NCI) – are typically limited to sectors that have been designated as critical infrastructure and their associated sub-sectors. Despite obvious reasons why it should be, space has not been recognized as a critical infrastructure sector.

For now, this seems like a good place to end. This introductory look into ISACs generally and the S-ISAC has left me with many questions about the organization itself and its developing relationship with the private space industry as a whole. Hopefully, these questions and more will be answered in the coming days as the S-ISAC and the private space industry continue to develop and grow. 

Here are some of my unaddressed questions to consider while exploring and considering the new S-ISAC: Why develop the S-ISAC now? What types of companies are welcome to become members, only defense contractors or, for example, commercial satellite constellation companies and small rocket launchers? As the commercial space industry continues to grow in areas such as space tourism, will the S-ISAC welcome these actors as well or will we see the establishment of a nearly-identical organization with a different name?

Nowadays it seems like everyone wants to get in on the rapidly-growing commercial space industry, reportedly worth approximately $340 billion per year. From Stratolaunch Systems’ “world’s largest plane, which acts as a launch pad in the sky,” to NASA’s Space Act Agreements (SAA) with Boeing and SpaceX for taxi services to and from the International Space Station (ISS), this is certainly not your parents’ space race.

While the private space industry of today may not have bloomed until after we entered the 21st century, the United States’ love affair with private-sector space activities can be traced back to the 1960s, although it was the passage of the Commercial Space Launch Act in 1984 that really lit a fire under private industry. It goes without saying that a lot has changed between then and now.

As a matter of fact, the private space sector as we know it today has a term all its own: NewSpace.

“Alt.space, NewSpace, entrepreneurial space, and other labels have been used to describe approaches to space development that differ significantly from that taken by NASA and the mainstream aerospace industry.”

HobbySpace.com

NewSpace is a move away from the traditional understanding of space as the domain of government agencies alone and a step toward more affordable access to space. This transition has allowed for the incredible growth and expansion of economic endeavors within the private space sector, and the sector is only expected to get bigger and more profitable as the technology continues to advance.

However, beyond the incredible news stories about “the world’s first commercial Spaceline” and Elon Musk sending his car into space – which you can track here, by the way – there is an entire universe of issues and concerns that do, or will, cause hiccups and delays before the first space tourists enter orbit.

One of the first concerns that comes to mind is often that of safety. Saying that there are a few safety concerns relating to commercial space transportation would be putting it very, very lightly. Risks and dangers plague every step of the process, from launchpad to landing. I am all for scientific inquiry and experimentation, but unfortunately this is one area where trial and error has a good chance of ending in both the loss of equipment and the loss of life.

Commercial space transportation is still a fairly high-risk industry in terms of safety, and the responsibility for developing safety regulations for the U.S. commercial space transportation industry rests with the Federal Aviation Administration (FAA) Office of Commercial Space Transportation (AST). The AST issues licenses and experimental permits for launch or reentry vehicles and spaceports once a safety approval has been granted.

According to the AST website, the FAA “has the authority to issue a safety approval for one or more of the following safety elements: a launch vehicle, a reentry vehicle, a safety system, process, service, or any identified component thereof, and qualified and trained personnel performing a process or function related to licensed launch activities.”

I will stop myself here (for now), but this is just a drop in the bucket. There are plenty of topics surrounding commercial space flight that this post didn’t discuss, such as issues with funding, the minefield that is space debris, and the question of whose law governs in space. While this may seem like a lot, be reassured by the fact that this means we all may have the chance to live out that childhood (adulthood) dream of being an astronaut.

One of the most exciting and economically advantageous aspects of autonomous vehicle technology is the ability of cars and heavy trucks to “platoon.” Platooning is a driver-assist technology that allows vehicles to travel in tandem, maintaining a close, constant distance. Imagine the trucks as racers in a bicycle or foot race: by drafting closely behind one another, the vehicles reduce their energy (fuel) consumption.

I personally believe that large-scale platooning should be the ultimate goal of autonomous vehicle technology; the potential time and fuel savings would be enormous if the highways were filled with vehicles drafting behind one another. Imagine a highway system without rubberneckers, without the guy who floors it and then slams on the brakes during rush hour, and without the “Phantom Traffic Jam.” Imagine instead an organized “train” of cars and trucks, following at a close but technologically safe distance (between 25 and 75 feet) and at a uniform speed.

This future is more likely to begin on a smaller scale, and in the commercial shipping sector, rather than in the consumer vehicle market. The work has already started with some platooning pilot programs involving heavy trucks.

These programs employ short-range communications technology and advanced driver assistance systems in their testing. The technology creates a seamless interface supporting synchronized actions; however, drivers are still needed to steer and monitor the system. When done with heavy commercial trucks — tractor-trailers, 18-wheelers, or semi-trucks (depending on what area of the country you live in) — the trucks are “coupled” through vehicle-to-vehicle (V2V) communication. The V2V technology allows the vehicles to synchronize acceleration, deceleration, and braking to increase efficiency and safety.

The economic incentives for platooning in the freight industry derive from the potential fuel savings, which come from reductions in aerodynamic drag. While both vehicles in a pair of platooning trucks save fuel, the rear vehicle typically saves significantly more. Tests conducted by the National Renewable Energy Laboratory demonstrated average fuel savings of up to 6.4 percent for a pair of platooning trucks: a lower amount (up to 5.3 percent) for the lead truck and a higher amount (up to 9.7 percent) for the trailing truck. These numbers varied based on the size of the gap between the two trucks and the driving speed. The ability to decrease fuel consumption in heavy freight vehicles represents an enormous opportunity to reduce the cost of shipping.

Fuel costs account for roughly one-third of the trucking industry’s cost per mile; a typical heavy-duty freight vehicle incurs between $70,000 and $125,000 in fuel costs each year. A vehicle that reduces its fuel consumption by 6.4 percent would therefore save $4,500 to $8,000 per year. These savings are potentially enormous when extrapolated across the more than 2 million tractor-trailers on the road. The ability to decrease shipping and transportation costs should be a substantial incentive for large shipping companies like FedEx, UPS, and Amazon.
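
As a quick sanity check, the per-truck figure follows directly from the numbers above; the short sketch below simply reproduces that arithmetic. The fleet-wide line is an illustrative extrapolation of my own and assumes every tractor-trailer platoons, which is obviously an upper bound rather than a forecast.

```python
# Reproduce the per-truck savings arithmetic from the figures cited above.
# The fleet-wide extrapolation is illustrative only: it assumes every
# tractor-trailer on the road platoons, an obvious upper bound.

ANNUAL_FUEL_COST = (70_000, 125_000)  # typical heavy-duty truck, dollars per year
AVG_SAVINGS_RATE = 0.064              # NREL average pairwise fuel savings
FLEET_SIZE = 2_000_000                # rough count of US tractor-trailers

low, high = (cost * AVG_SAVINGS_RATE for cost in ANNUAL_FUEL_COST)
print(f"Per truck:  ${low:,.0f} to ${high:,.0f} saved per year")
print(f"Fleet-wide: ${low * FLEET_SIZE / 1e9:.0f}B to ${high * FLEET_SIZE / 1e9:.0f}B per year (upper bound)")
```

On the source’s own numbers, that works out to roughly $4,500 to $8,000 per truck per year, or on the order of $9 billion to $16 billion annually in the implausible case that the entire fleet platooned.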

While getting the significant players in the transportation industry on board is crucial, an estimated 90% of trucking companies in the U.S. operate fleets of six trucks or fewer, and 97% have fewer than 20. Retrofitting existing truck cabs with the necessary technology could pose a substantial hardship for these small businesses. However, it is projected that owner-operators would recoup their investment in 10 months, and fleet operators in 18 months. This relatively short payback period could incentivize even small-scale operators to invest in the technology.

Platooning technology could also help offset the recent spike in the average cost of truck operations. Most of that increase came from higher driver wages and benefits, likely due to a shortage of long-haul truck drivers. The shortage is only expected to grow; the combination of long hours, inconsistent schedules, long stretches of solitude, and low pay has increased turnover and disincentivized new drivers from entering the labor market. While the technology is not yet poised to run without drivers, a single truck driver could one day lead a platoon of autonomous trucks, decreasing the need for a driver in every cab.

My vision of a highway filled with platooning vehicles may not be feasible yet, but with proper investment by businesses, platooning technology could become viable, and cost-effective, within a few years.

2018 was the year of the electric scooter. Scooters appeared unexpectedly, lined up on sidewalks, often before city regulators and officials had time to prepare for their arrival. Their spontaneous presence and practically unregulated use provoked outrage from consumers, city councils, and sidewalk users everywhere.

If 2018 was the year of the electric scooter, 2020 might be the year of the electric moped. Revel, the New York-based electric moped start-up, has placed more than 1,400 mopeds across Washington, D.C., and Brooklyn and Queens, New York, with plans to expand to 10 cities by mid-2020.

Revel’s mopeds operate in much the same manner as the electric scooters offered by companies like Spin, Lime, and Bird. Riders sign up, pay for, and lock/unlock the vehicles through an app. But where scooters are suited to last-mile travel, mopeds may fill a medium-distance gap in micro-mobility: they are better for longer trips where being able to sit down and travel at faster speeds is desirable. They are a good complement, not a rival, to other micro-mobility services. The more mobility services available to the public, the more comfortable people will be using them, and overcoming that comfort threshold is important to increasing the use of alternative transportation services.

However, in stark contrast to the drop-and-run business model initially employed by many electric scooter companies, Revel differentiates itself by emphasizing safety and securing regulatory approval before deploying. When Washington, D.C. announced in August that the city was launching a demonstration pilot for “motor-driven cycles” (“mopeds”), Revel CEO Frank Reig expressed immediate interest in participating:

“We share their goals of providing new, reliable transportation options that work seamlessly in the city’s current regulatory, transportation, and parking systems and help the District meet its aggressive carbon emissions goals.”

Revel’s policy is not just to work with regulators when required; the company seeks to foster a cooperative environment that sets it up for long-term success and partnership with the cities where its mopeds eventually deploy. Whereas many cities have banned scooters, temporarily or permanently, working upfront with city officials may benefit Revel in the long run — potentially protecting it from being required to pull its vehicles from city streets.

The cooperative approach should serve as an example for other micro-mobility companies seeking to expand their operations; sometimes it is better to ask permission than forgiveness. The goodwill may pay off in the long run if local governments decide to limit how many companies may operate in a city. Revel also avoids the potential regulatory gap that electric scooters fall into; mopeds are definitely motor vehicles, as CEO Reig has made sure to emphasize:

These mopeds are motor vehicles. This means there is no regulatory gray area: you have to have a license plate. To get that license plate, you have to register each vehicle with the Department of Motor Vehicles in each state and show third-party auto liability insurance. And then because it’s a motor vehicle, it’s clear that it rides in the street, so we’re completely off sidewalks.

Other areas of differentiation are safety and employment. Revel’s mopeds are limited to riders aged 21 and older, capped at 30 miles per hour, come with two helmets, and require riders to submit their driver’s license for a safe driving history check. Moreover, unlike electric scooter companies that rely on people working in the so-called “gig economy” to charge their scooters, Revel relies on full-time employees to swap out the vehicles’ batteries. This employment structure is another selling point for cities: full-time jobs and payroll taxes. The company is making an investment that mobility companies operating on an independent contractor model do not make. The relationship benefits both the cities and Revel, according to CEO Reig:

Our biggest lesson from New York and Washington is that Revel works for cities as they exist today. They work for our riders. They work for our regulators who are seeking ways to enhance their transportation networks, not disrupt them.

After receiving nearly $27 million in Series A funding, including an investment by Toyota AI Ventures, Revel could potentially increase its vehicle fleet 10-fold, helping it meet its ambitious expansion plans by the middle of next year.