Cars are getting smarter and safer. And yet this new breed of automobile remains inaccessible to large parts of the consumer base due to high costs. Some of these costs are a natural result of technological advancements in the automobile industry. Others, however, may be a product of inefficient market dynamics among car manufacturers, insurers, and technology companies – dynamics that ultimately leave our roads less safe than they could be.

Advanced Driver Assistance Systems (ADAS) that equip cars with features like autonomous braking, parking assistance, and blind spot detection are growing at an exponential rate. The global ADAS market was estimated at around $14.15 billion in 2016; it has grown rapidly since and is expected to reach $67 billion by 2025. Not only is this good news for ADAS developers, it also stands to significantly increase road safety. The Insurance Institute for Highway Safety estimates, for instance, that deploying automatic emergency braking in most cars on the road could prevent 28,000 crashes and 12,000 injuries by 2025.

The biggest roadblock to widespread adoption of ADAS-equipped cars remains their prohibitive cost. Lower rates of adoption not only reduce the overall safety of cars on the road but also disproportionately affect poorer people. Unsurprisingly, a study in Maryland found that individuals at the upper end of the socioeconomic spectrum have greater access to vehicle safety features, leaving those at the lower end at higher risk.

A significant contributor to the continued high cost of automated vehicles is the high cost of insuring them. This seems rather counterintuitive. The technological evolution of safety systems reduces the risk of car crashes and other incidents, which was expected to drive down insurance premiums. And yet, costs remain high. Insurance companies have resisted demands to lower premiums, claiming that the data on ADAS and its efficacy in reducing risk is simply not conclusive. Moreover, the industry claims that even if ADAS can reduce the number of vehicular incidents, each incident involving an automated car costs more because of the sophisticated and often delicate hardware, such as sensors and cameras, installed in these cars. As the executive vice president of Hanover Insurance Group puts it, “There’s no such thing as a $300 bumper anymore. It’s closer to $1,500 in repair costs nowadays.”

There is no doubt that these are legitimate concerns. An industry whose entire business model involves pricing risk can hardly be blamed for seeking more accurate data with which to quantify that risk. Unfortunately, none of the actors in the automated vehicle industry are particularly forthcoming with their data. Still at a relatively nascent stage, the AV industry is highly competitive, with large parts of its operations shrouded in secrecy. Car manufacturers that operate fleets of automated vehicles, and no doubt gather substantial data around crash reports, are loath to share it with insurers for fear of giving away proprietary information and losing their competitive edge. The consequence of this lack of open exchange is that AVs remain expensive and, perhaps, improperly priced from a risk standpoint.

There are some new attempts to work around this problem. Swiss Re, for example, is developing a global ADAS risk score that encourages car manufacturers to share data with it, which it would in turn use to recommend discounts to insurers. Continental AG has similarly developed a Data Monetization Platform that reportedly allows fleet operators to sell data in a secure and transparent manner to city authorities, insurers, and other interested parties. These are early days, so whether these initiatives can overcome the insecurities around trade secrets and proprietary data remains to be seen.

It is clear, however, that the insurance industry will need to change along with cars and the technologies inside them. As a recent Harvard Business Review article points out, automated vehicles will fundamentally alter the private car insurance market by shifting car ownership from an individual-centric model to a fleet-centric one, at least in the short to medium term. This shift alone could cost auto insurers nearly $25 billion (roughly one-eighth of the global market) in premium revenue. It is imperative, therefore, that the insurance industry devise innovative new approaches to pricing the risk associated with AVs. Hopefully it can do so without further driving up costs, and while making safer technologies accessible to those who need them most.

An important development in the artificial intelligence space occurred last month when the Pentagon’s Defense Innovation Board released draft recommendations [PDF] on the ethical use of AI by the Department of Defense. The recommendations, if adopted, are expected to “help guide, inform, and inculcate the ethical and responsible use of AI – in both combat and non-combat environments.”

For better or for worse, a predominant debate around the development of autonomous systems today revolves around ethics. By definition, autonomous systems are predicated on self-learning and reduced human involvement. As Andrew Moore, head of Google Cloud AI and former dean of computer science at Carnegie Mellon University, defines it, artificial intelligence is just “the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

How then do makers of these systems ensure that the human values that guide everyday interactions are replicated in decisions that machines make? The answer, the argument goes, lies in coding ethical principles that have been tested for centuries into otherwise “ethically blind” machines.

Critics of this argument posit that the recent trend of researching and codifying ethical guidelines is just one way for tech companies to avoid government regulation. Major companies like Google, Facebook, and Amazon have all either adopted AI charters or established committees to define ethical principles. Whether these approaches are useful is still open to debate. One study, for example, found that priming software developers with ethical codes of conduct had “no observed effect” [PDF] on their decision making. Does this mean the whole conversation around AI and ethics is moot? Perhaps not.

In the study and development of autonomous systems, the content of ethical guidelines is only as important as the institution adopting them. The primary reason ethical principles adopted by tech companies are met with cynicism is that they are voluntary and do not, in and of themselves, ensure implementation in practice. On the other hand, when similar principles are adopted by institutions that treat the prescribed codes as red lines and have the legal authority to enforce them, these ethical guidelines become enormously consequential documents.

The Pentagon’s recommendations – essentially five high-level principles – must be lauded for moving the conversation in the right direction. The draft document establishes that AI systems developed and deployed by the DoD must be responsible, equitable, traceable, reliable, and governable. Of special note among these are the calls to make AI traceable and governable. Traceability in this context refers to the ability of a technician to reverse-engineer the decision-making process of an autonomous system and glean how it arrived at the conclusion it did; the report describes this as “auditable methodologies, data sources, and design procedure and documentation.” Governable AI similarly requires systems to be developed with the ability to “disengage or deactivate deployed systems that demonstrate escalatory or other behavior.”
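
To make these two properties a little more concrete, here is a minimal, purely illustrative sketch of how a developer might approximate them in software: every decision is logged with its inputs and model version so it can be audited later (traceability), and a guard can take the system offline when its behavior crosses a configured threshold (governability). This is not drawn from the DoD document or any vendor API; all names here (`DecisionAuditLog`, `GovernabilityGuard`, the risk threshold) are hypothetical.

```python
import json
import time
from dataclasses import dataclass, field

# Hypothetical illustration only -- not an actual DoD or vendor API.

@dataclass
class DecisionAuditLog:
    """Records every decision with its inputs, model version, and score
    so the reasoning behind an outcome can be audited after the fact."""
    records: list = field(default_factory=list)

    def record(self, inputs: dict, model_version: str, score: float, action: str) -> None:
        self.records.append({
            "timestamp": time.time(),
            "inputs": inputs,
            "model_version": model_version,
            "score": score,
            "action": action,
        })

    def export(self) -> str:
        # A reviewer can inspect or replay the full decision trail.
        return json.dumps(self.records, indent=2)


class GovernabilityGuard:
    """Deactivates the system if a decision's risk score exceeds a configured
    threshold -- a crude stand-in for 'disengage or deactivate'."""

    def __init__(self, risk_threshold: float = 0.9):
        self.risk_threshold = risk_threshold
        self.active = True

    def check(self, score: float) -> bool:
        if score > self.risk_threshold:
            self.active = False  # a human operator must re-enable explicitly
        return self.active


def decide(inputs: dict, score: float, log: DecisionAuditLog, guard: GovernabilityGuard) -> str:
    """Make a decision, subject to the guard, and log it for later audit."""
    if not guard.check(score):
        action = "system_disengaged"
    else:
        action = "engage" if score > 0.5 else "hold"
    log.record(inputs, model_version="demo-0.1", score=score, action=action)
    return action


if __name__ == "__main__":
    log, guard = DecisionAuditLog(), GovernabilityGuard(risk_threshold=0.9)
    print(decide({"sensor": "camera-3"}, 0.72, log, guard))  # engage
    print(decide({"sensor": "camera-3"}, 0.95, log, guard))  # system_disengaged
    print(log.export())
```

Real systems would of course need far more than this, but the sketch shows the basic shape of the two requirements: a persistent, inspectable record of how each decision was reached, and an off-switch that does not depend on the system's own cooperation.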

Both of these aspects are among the most overlooked in conversations around autonomous systems, and yet they are critical to ensuring reliability. They are also likely to be the most contested, since questions of accountability will arise when machines inevitably malfunction. And they are likely to make ‘decision made by algorithm’ a less viable defense when the creators of AI are confronted with questions of bias and discrimination – as Apple and Goldman Sachs’ credit limit-assigning algorithm recently was.

While the most direct application of the DoD’s principles is in the context of lethal autonomous weapon systems, their relevance will likely be felt far and wide. The private technology companies currently bidding for and building autonomous systems for military use – such as Microsoft, with its $10 billion JEDI contract to overhaul the military’s cloud computing infrastructure, and Amazon, whose facial recognition system is used by law enforcement – will likely have to invest in building new fail-safes into their systems to comply with the DoD’s recommendations. These efforts will likely bleed through into systems being developed for civilian use as well. The DoD is certainly not the first institution to adopt such principles. Non-governmental bodies such as the Institute of Electrical and Electronics Engineers (IEEE) – the largest technical professional organization in the world – have also called [PDF] for the adoption of standards around transparency and accountability in AI, to provide “an unambiguous rationale” for all decisions taken. While debates over which ethical principles can be applied to machine learning will continue for the foreseeable future, the Pentagon’s draft could play a key role in moving the needle forward.