The Dark Questions Self-Driving Cars Raise

[Image: Google self-driving car]

By Michael Nurnberger

The word “automobile” literally means “self-moving,” and if self-driving cars become the norm, that may well become the understood meaning. In that future, drinking and driving may no longer be a crime, and sleeping at the wheel will not be an issue. This is, of course, contingent upon self-driving cars being able to take you to and from your destination on nothing more than a slurred “home” or “work.” Google's safety record for its vehicles is excellent, considering the nature of their accidents. The overwhelming majority of the few accidents so far have been fender benders caused by other drivers running into the self-driving car, usually at stoplights and usually while it was moving at around five miles per hour. People appear to be prone to running into the back of the car: the worst accident, and the only injury so far, resulted from exactly that kind of rear-end collision. Of all of these, the most interesting incident occurred when two self-driving cars narrowly avoided hitting one another.

Self-driving cars are an objectively safer alternative to normal human driving in normal circumstances. This much is clear. Google has even posted the accident that led to the minor injury, and it’s quite telling. If the world were to have only self-driving cars, it would be a much safer one. However, there are potential dangers as well. In a terrifying demonstration, Charlie Miller and Chris Valasek hacked a Ford Escape, disabling its brakes, jerking the steering wheel around, and forcing the brakes on. As might be expected, being able to remotely control someone’s vehicle creates an entirely new class of security issues when the car is doing the driving. A security bill introduced by Senators Edward Markey and Richard Blumenthal would hopefully help protect consumers from data collection as well as the hacking of their vehicles by requiring stronger vehicle security measures.
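Part of what made that demonstration possible is that in-car control messages are often accepted without any check on who actually sent them. As a purely illustrative sketch, and not anything drawn from the bill or from an actual automotive standard, the Python below shows one form such a “vehicle security measure” could take: requiring a command like “apply brakes” to carry a cryptographic tag before it is obeyed. The key, message format, and command names are invented for the example.

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned into the car's control unit;
# a real system would manage and rotate keys far more carefully.
SECRET_KEY = b"example-key-not-a-real-one"

def sign_command(command: bytes) -> bytes:
    """Attach an HMAC tag so the receiver can check where the command came from."""
    tag = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def verify_command(message: bytes) -> bool:
    """Accept only commands whose tag matches; reject forged or tampered ones."""
    command, _, tag = message.rpartition(b"|")
    expected = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

print(verify_command(sign_command(b"APPLY_BRAKES")))     # True: legitimate command
print(verify_command(b"APPLY_BRAKES|forged-signature"))  # False: attacker's forgery is rejected
```

The point is not this particular scheme but the principle behind it: a car that drives itself has to be able to tell a legitimate instruction from one injected by an attacker.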

How will vehicles react not only to hacking of their electronics, but to attempts to exploit the self-driving software itself? This raises the question of how self-driving vehicles will respond to certain quandaries. If someone runs in front of the car with a gun, how will the car react? This is an extreme example, and hopefully one you would never see, but depending on how the vehicle responds, the creators of its software could become liable if harm comes to the passengers.

What if a situation arises where the car needs to respond to stimuli and potentially cause damage to others in order to save the driver? This creates a nightmare of liability and responsibility. If the vehicle strictly and narrowly obeys traffic laws, it can still fail through inaction. For example, if there is a carjacking, how does the vehicle respond? It can call 911, but will it allow the danger to justify breaking the law? In a scenario where a human is in control, they might simply slam down on the pedal. The car's AI has to account for a ludicrous number of scenarios, each with perhaps one chance in a thousand or a million of ever occurring. The liability lawsuits would come from every direction to exploit such a flaw. After all, who wants to purchase a vehicle that doesn't treat the driver's safety as priority number one? But who would want to live in a world where autonomous cars, quite plausibly, given the right scenario, would readily sacrifice a full school bus to save their single passenger? There is no single “right” answer to these kinds of problems, which is why autonomous car manufacturers like Google are quick to sidestep this morbid line of questioning in favor of touting the (admittedly spectacular) safety record of their driverless vehicles.
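To make the dilemma concrete, here is a minimal, purely hypothetical sketch of the kind of “minimize expected harm” policy this debate worries about. The maneuvers, the harm estimates, and the occupant_weight knob are all invented for illustration; nothing here reflects how Google or any other manufacturer actually programs its vehicles.

```python
# Hypothetical sketch: each candidate maneuver carries rough estimates of the
# harm it is expected to cause to the car's occupants and to everyone else.

def choose_maneuver(maneuvers, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    occupant_weight > 1 prioritizes the passenger; < 1 prioritizes bystanders.
    """
    def cost(maneuver):
        name, harm_to_occupants, harm_to_others = maneuver
        return occupant_weight * harm_to_occupants + harm_to_others
    return min(maneuvers, key=cost)

# A contrived scenario: swerving protects the passenger but endangers a full bus.
scenario = [
    ("brake in lane",     0.7, 0.0),   # passenger likely hurt, no one else harmed
    ("swerve toward bus", 0.1, 5.0),   # passenger safe, many others put at risk
]

print(choose_maneuver(scenario, occupant_weight=1.0)[0])   # "brake in lane"
print(choose_maneuver(scenario, occupant_weight=10.0)[0])  # "swerve toward bus"
```

The uncomfortable part is not the arithmetic; it is that someone has to choose that weight, and whichever value they choose is a decision about whose safety counts for more.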

For now, companies continue testing, and those tests appear to be going quite well, with the Google cars being made as safe and as convenient as possible, and with every accident so far being the fault of the drivers who hit them. On the other hand, these cars also have the potential to do harm, through either their action in judging situations or their inaction in dangerous ones. This is why you should expect more articles from ethicists and AI programmers alike with titles like “Why Driverless Cars Must Be Programmed to Kill.” It is, of course, a morbid thought. However, with technology able to mitigate the loss of life, it would be negligent of driverless car manufacturers not to attempt to solve this modern-day trolley problem.
