A pedestrian was killed in Tempe, Arizona by an Uber autonomous car. In 2015, Governor Doug Ducey enticed the self-driving car industry to Arizona with an executive order clearing the way for testing in the state. Last month, he updated this order, touting Arizona’s “business friendly and low regulatory environment”. Following the crash, Uber has stopped all real-world testing of its autonomous cars, which had been under way in San Francisco, Phoenix, Pittsburgh, and Toronto. The accident is now in the crosshairs of both the U.S. National Highway Traffic Safety Administration and the National Transportation Safety Board.
The recent Cambridge Analytica revelations about the use of Facebook data to aid Donald Trump’s campaign are ill-timed for autonomous car companies, and they are forcing regulators to increase scrutiny of the self-policing that tech companies have so far been granted. The fatality and the privacy-breach revelations will almost certainly slow the pace of autonomous car technology advancement in the U.S.
There are at least two broad black-box areas regulators will want to examine and ultimately address. One is conceptually straightforward while being technically bedeviling. Autonomous cars are trained using AI methods such as deep learning on massive amounts of data to interpret and react to driving conditions. However, unlike traditional statistical predictive methods such as regression analysis, deep learning does not easily lend itself to transparency of decision making, which leaves it with an air of magic. This reality will make it difficult for regulators to communicate with an increasingly skeptical public.
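The contrast can be made concrete with a small sketch (hypothetical code assuming scikit-learn, not drawn from any carmaker’s actual system): a fitted regression explains itself through a handful of readable coefficients, while even a tiny neural network distributes its “reasoning” across more than a thousand weights with no individual meaning.

```python
# Toy illustration of the transparency gap between regression and
# deep learning. Assumes scikit-learn and NumPy; data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Linear regression: each coefficient directly states a feature's effect,
# so the model's decision rule can be read off and audited.
lin = LinearRegression().fit(X, y)
print(lin.coef_)  # roughly [2, -1, 0]

# A small neural network fits the same data, but its "explanation" is
# over a thousand weights spread across layers, none interpretable alone.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
n_weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(n_weights)
```

Even this minimal network has 1,217 parameters; a production driving model has millions, which is the nub of the regulator’s communication problem.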
The other issue is philosophically much more challenging. Inevitably, autonomous cars are going to be in situations requiring them to make an instantaneous choice among a set of bad outcomes. Consider, for example, a car faced with a choice between modest damage to itself and more serious damage to its surroundings. Even more fundamentally, what happens when lives are at stake? How will the car weigh the tradeoffs and react to them? At some level, the processes for making these ethical decisions must be programmed into the car. The public at large is unlikely to accept a Google, BMW, Ford, or Uber unilaterally making such decisions. The Cambridge Analytica headlines eroding public trust in tech companies, and now a self-driving car fatality, will train a bright spotlight on the core of autonomous vehicle systems.
The rule of law and consumer protection are strengths of the U.S. At the same time, these strengths could prove an impediment in the race for global leadership in the development of autonomous cars, and of AI more broadly.