Will a Google car sacrifice you for the sake of the many? (And Networked Road Neutrality)

Data & Society

Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.

But what will robot cars be programmed to do when there are lots of them on the roads, and they’re networked with one another?

We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive might well be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.

It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars nearby rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.

Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, every other joint action results in a tractor trailer jackknifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.
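
To make that arithmetic concrete, here’s a toy sketch of the kind of plan selection these scenarios imply. None of this is Google’s actual code; the weights, plan names, and numbers are invented for illustration. The point is just that once the objective is written down, somebody has to end up on the losing side of the minimization.

```python
# A toy sketch -- not anything Google has published -- of the kind of plan selection
# those two scenarios imply: the networked cars enumerate candidate joint plans,
# score each against an invented harm scale, and pick the cheapest plan overall,
# even when that plan is catastrophic for one particular passenger.

# Hypothetical weights, in dollars per unit of harm. The rest of this post is about
# how contestable these numbers are.
WEIGHTS = {"property_damage_usd": 1, "minor_injuries": 200_000, "expected_deaths": 5_000_000}

def total_harm(plan):
    """Weighted sum of a plan's predicted outcomes."""
    return sum(WEIGHTS[k] * plan["outcomes"][k] for k in WEIGHTS)

def choose_plan(candidates):
    """Pick the joint plan that minimizes collective harm, regardless of who bears it."""
    return min(candidates, key=total_harm)

candidates = [
    {"name": "swerve and brake around the pedestrian",
     "outcomes": {"property_damage_usd": 40_000, "minor_injuries": 5, "expected_deaths": 1.2}},
    {"name": "my car rams the concrete piling",
     "outcomes": {"property_damage_usd": 60_000, "minor_injuries": 3, "expected_deaths": 0.95}},
]

print(choose_plan(candidates)["name"])  # -> "my car rams the concrete piling"
```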

In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?

But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in, because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think about the kids — those cute choir kids?
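
To see how much hangs on those choices, here’s a tiny continuation of the toy sketch above: the same two candidate plans, and the answer flips with a small, entirely debatable change in how a pedestrian’s death is weighed against a passenger’s.

```python
# A continuation of the toy sketch: same two candidate plans, but the "right" answer
# flips depending on an entirely contestable choice about how heavily a pedestrian's
# death counts relative to a passenger's. All numbers are invented.

plans = {
    "protect the passenger": {"passenger_deaths": 0.0, "pedestrian_deaths": 1.0},
    "sacrifice the passenger": {"passenger_deaths": 0.95, "pedestrian_deaths": 0.0},
}

def pick(weights):
    """Return the plan name with the lowest weighted expected-death count."""
    return min(plans, key=lambda name: sum(weights[k] * plans[name][k] for k in weights))

print(pick({"passenger_deaths": 1.0, "pedestrian_deaths": 1.0}))  # -> sacrifice the passenger
print(pick({"passenger_deaths": 1.0, "pedestrian_deaths": 0.9}))  # -> protect the passenger
```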

We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip, such as chasing a bad guy or driving someone who urgently needs medical care.

But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back on fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.

Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.



Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.

Imagine that Google remains in the lead, and the bulk of the cars carry its brand. And assume that these cars are in networked communication with one another.

Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?

Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization, the fast lane probably wouldn’t actually be a designated lane but might well look more like the way frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way when a car’s “Move aside!” bit is on?
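
For what it’s worth, the mechanism itself would be trivial. Here’s a purely hypothetical sketch of such a “Move aside!” bit (invented names throughout); the hard part isn’t the code, it’s deciding who gets to set the flag.

```python
# A minimal, purely hypothetical sketch of that "Move aside!" bit, with invented
# field names. The code is trivial; the real question is who is allowed to set
# priority_flag -- an ambulance, or anyone who pays for the virtual fast lane.

from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    priority_flag: bool = False  # emergency vehicle? premium subscriber? who decides?

def should_yield(me: Vehicle, other: Vehicle) -> bool:
    """Give way when the other car asserts priority and we don't."""
    return other.priority_flag and not me.priority_flag

ambulance = Vehicle("ambulance-12", priority_flag=True)
my_car = Vehicle("my-car")
print(should_yield(my_car, ambulance))  # True: fine for an ambulance, but for a paying customer?
```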

Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?

One thing is sure: It’ll be a golden age for lobbyists.