
The Self-Driving Trolley Problem

You have two options: 

1. Crash a car, killing a woman and a child.

2. Run over four men who are crossing a road at a red light. 

How do you decide? Do you save the greatest number of lives, no matter what? Or do you give preference based on age, or gender, or even lawfulness?

These are the ethical dilemmas which MIT's long-running "trolley problem" simulator, the Moral Machine, forces participants to confront (confront them yourself here!). The famous thought experiment has come to the fore again thanks to its practical implications for the development of self-driving cars. If fatalities are inevitable, how should a car be programmed to react?

The Moral Machine survey suggests that finding a universal moral code may be an impossible task. More than 2.3 million people from over 100 countries have taken part so far, and the results reveal significant cultural differences in which lives respondents choose to save: the young or the old, the lawful or the unlawful, the athletic or the unfit. For example, respondents from Finland and Japan preferred to hit pedestrians who were crossing the road illegally, whereas respondents in Nigeria and Pakistan showed no such preference.

This leaves developers of self-driving cars in a difficult position. Do they impose their own moral preferences on others, or do they customise the car's programming to reflect local cultural preferences? Currently, we accept that these decisions lie with the human driver. Should we therefore give drivers the right to choose their own preferences when setting up the car? That idea seems abhorrent and would surely lead to the systematisation of biases that has been discussed here before.
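To see why driver-selectable preferences are so troubling, consider a purely hypothetical sketch in Python. Nothing below describes any real vehicle's software or API; every class name, weight and scenario is invented for illustration. It simply shows how a configurable "ethics policy" might weigh lives, and how two owners with different settings would make opposite choices in the same crash.

```python
# Purely illustrative sketch -- not any real vehicle's software.
# All names, weights and scenarios below are invented for this example.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    age: int
    crossing_legally: bool

def harm_score(group: list[Pedestrian], weights: dict[str, float]) -> float:
    """Weighted 'cost' of hitting this group; the policy avoids the higher score."""
    total = 0.0
    for p in group:
        value = 1.0  # one life
        if p.age < 18:
            value *= weights.get("youth_weight", 1.0)   # extra weight on children
        if p.crossing_legally:
            value *= weights.get("lawful_weight", 1.0)  # extra weight on lawful crossers
        total += value
    return total

# Option A: a woman and a child crossing legally; Option B: four men jaywalking.
option_a = [Pedestrian(age=8, crossing_legally=True), Pedestrian(age=35, crossing_legally=True)]
option_b = [Pedestrian(age=40, crossing_legally=False) for _ in range(4)]

owner_1 = {"youth_weight": 3.0, "lawful_weight": 1.5}  # protects the young and the lawful
owner_2 = {"youth_weight": 1.0, "lawful_weight": 1.0}  # simply counts lives

for name, w in [("owner_1", owner_1), ("owner_2", owner_2)]:
    hit = "B" if harm_score(option_a, w) > harm_score(option_b, w) else "A"
    print(f"{name} configures the car to hit option {hit}")
```

The same crash produces two different "correct" answers, each baked silently into a configuration file; that is exactly the systematisation of bias described above.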

These issues risk derailing the widespread adoption of a technology that has the potential to make roads, on the whole, much safer. A social consensus may be difficult to achieve, but it is something developers need to strive for. This will mean clearly articulating the trade-offs in the technology and seeking input from society, regulators and ethicists during development. MIT's Moral Machine simulator is a good place to start.

Self-driving cars might soon have to make such ethical judgments on their own — but settling on a universal moral code for the vehicles could be a thorny task, suggests a survey of 2.3 million people from around the world. The largest ever survey of machine ethics, published in Nature, finds that many of the moral principles that guide a driver's decisions vary by country.


Tags

ai, standards, automobiles and parts