By: Alex Wall
Imagine you are the driver of a runaway train.
The brakes on your train have failed, but you still have the ability to steer the train from the main track to a single alternate track. You can see the two tracks ahead of you; on the main track there are five workers, on the alternate track there is one. Both tracks are in narrow tunnels so whichever track the train takes, anyone on that track will surely be killed. Which way do you steer the train? Do you let it continue down the main track, killing five, or do you switch it onto the alternate track, killing one?
This is the famous Trolley Problem. First formulated by Philippa Foot in 1967, it has been a staple in ethics and moral philosophy courses ever since. The reason is clear: how you answer reveals something about your gut sense of what is right and wrong, what we call moral intuition, and by thinking through different versions of the problem it is possible to try to refine those moral intuitions into explicit moral principles.
For example, most people respond to the Trolley Problem by saying that they would steer the train onto the alternate track. Their gut feeling, or moral intuition, is that it is better to pick the option that kills only one person rather than the option that kills five.
If you were to express this moral intuition as a principle, you might say that “five lives are worth more than one”, or that “the needs of the many outweigh the needs of the few”. This sort of utilitarian principle matches what most people report as their gut reaction to the original Trolley Problem.
But what happens when the problem is modified? One famous variation on the Trolley Problem, first offered by Judith Jarvis Thomson in 1976, is as follows:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
Again, most people have a strong gut reaction to this version of the problem, viewing it as clearly wrong to push the fat man onto the tracks. This includes the vast majority of people who said they would switch the train onto the alternate track, sacrificing one in order to save five. In the previous case, their moral intuition lined up with the simple utilitarian principle that you should do whatever will save the most lives. But now, faced with the scenario where they would have to actively throw a person onto the tracks, their moral intuition rejects such a principle.
This is the power of the Trolley Problem. What initially seems to be an easy task of turning moral intuitions into moral principles proves to be considerably more difficult as new versions of the problem are introduced and we try to reconcile the differences in our answers.
Now, in the age of self-driving cars, we have a fascinating new variation on the Trolley Problem: what if you didn’t get to choose in the moment, but rather the choice was determined by the vehicle? In this case the choice, insofar as it is still appropriate to use that word, will be made by the programmers of the vehicle. They will have to anticipate all sorts of possible no-win scenarios that their vehicles might find themselves in, and write rules that the vehicles will follow to make their decisions.
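To see what such a hard-coded rule might look like, here is a minimal sketch in Python. Everything in it is hypothetical (no real vehicle uses this function or these names); it simply encodes the utilitarian principle from the original problem, "pick the option that kills the fewest people", which, as the fat-man variation shows, is exactly the principle our intuitions do not consistently endorse:

```python
# Hypothetical sketch of a hard-coded "minimize casualties" rule.
# All names are illustrative; no real self-driving system is quoted here.

def choose_path(paths):
    """Pick the path with the fewest people on it.

    paths: dict mapping a path name to the number of people on that path.
    Returns the name of the chosen path under a purely utilitarian rule.
    """
    # The simple utilitarian principle: fewest expected deaths wins.
    return min(paths, key=paths.get)

# The original Trolley Problem, encoded as data:
choice = choose_path({"main_track": 5, "alternate_track": 1})
print(choice)  # alternate_track
```

The sketch also shows why the problem is hard: a rule this simple would, if taken literally, endorse pushing the fat man too, since it counts only the number of lives and not how they are taken.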
But can the clear, rules-based language of computer code handle the nuances of ethical problems? Can programmers solve the Trolley Problem using explicit principles rather than vague gut feelings? Or will they find, as so many philosophy students have before them, that their attempts to refine their moral intuitions into principles fail to account for some new situation that comes up?
Self-driving cars give the Trolley Problem a new relevance. Only now the problem isn’t a thought experiment in a classroom, but a steel-and-glass one on the roads.