- Researchers designed an experiment to gather a human perspective on moral decisions made by artificial intelligence.
- They analyzed millions of human responses and revealed how much ethical preferences vary across cultures.
We are entering an era where machines are tasked not only with minimizing harm but also with distributing the harm they cannot eliminate. Distributing harm (as well as the well-being a machine creates) usually involves tradeoffs, and resolving those tradeoffs falls within the moral domain.
Consider a situation where an autonomous vehicle is about to crash and cannot find a path that would save everyone. In such cases, the vehicle must decide how to fairly distribute the risk of harm among the different people on the road. These moral dilemmas cannot be resolved by any simple law of robotics or single set of human ethical principles.
In 2016, MIT researchers designed an experiment named Moral Machine to gather a human perspective on moral decisions made by AI-equipped machines, such as self-driving cars. So far, the experiment has collected more than 40 million responses from all over the world, offering valuable insights into the collective ethical priorities of different cultures.
The Moral Machine presents 13 scenarios in which people decide what a self-driving vehicle should prioritize:
- More lives over fewer
- Men over women
- Humans over pets
- Young over old
- Higher social status over lower
- Fit over unhealthy
- Law-followers over lawbreakers
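The design above can be sketched in code: each dilemma pits two groups of characters against each other, and a respondent's choice implicitly votes along one or more of the dimensions listed. The model below is a hypothetical simplification for illustration (the class and function names are not from the actual Moral Machine codebase), covering just two of the dimensions:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of one Moral Machine-style dilemma:
# the vehicle must choose which of two groups bears the harm.

@dataclass(frozen=True)
class Character:
    species: str   # "human" or "pet"
    age: str       # "young" or "old"
    lawful: bool   # crossing legally?

@dataclass
class Dilemma:
    stay_course: tuple[Character, ...]  # harmed if the car stays on course
    swerve: tuple[Character, ...]       # harmed if the car swerves

def record_choice(tally: dict, spared: tuple, harmed: tuple) -> None:
    """Tally which preference dimensions the respondent's choice favored."""
    if len(spared) > len(harmed):
        tally["more_lives"] = tally.get("more_lives", 0) + 1
    if all(c.species == "human" for c in spared) and any(c.species == "pet" for c in harmed):
        tally["humans_over_pets"] = tally.get("humans_over_pets", 0) + 1

dilemma = Dilemma(
    stay_course=(Character("human", "young", True), Character("human", "old", True)),
    swerve=(Character("pet", "young", True),),
)

tally: dict = {}
# The respondent chooses to swerve, sparing the two pedestrians and harming the pet.
record_choice(tally, spared=dilemma.stay_course, harmed=dilemma.swerve)
print(tally)  # both "more lives" and "humans over pets" are credited
```

A real scenario generator would vary all of the dimensions at once so that each response can be attributed to the factor that drove it.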
Researchers analyzed the responses and revealed how human ethics vary by geographic location, economics, and culture.
They found that preferences differ across countries. For example, people from collectivist cultures such as Japan and China are more likely to spare the old over the young, whereas people from more individualistic cultures are less likely to spare the old.
Reference: The Moral Machine Experiment | Nature | DOI: 10.1038/s41586-018-0637-6
Similarly, people from individualistic cultures, such as the US and the UK, tend to prefer sparing the greater number of lives, all else being equal. People from developing countries with weaker institutions are more tolerant of jaywalking pedestrians compared with pedestrians who cross legally.
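At a high level, these cross-country comparisons come down to aggregating millions of binary choices per country and comparing the rates. The toy sketch below uses invented data and a plain proportion rather than the paper's actual conjoint-analysis method, purely to illustrate the shape of the computation:

```python
from collections import defaultdict

# Hypothetical toy responses: (country, spared_young) — True if the
# respondent chose to spare the younger characters in a young-vs-old dilemma.
responses = [
    ("US", True), ("US", True), ("US", False),
    ("JP", False), ("JP", True), ("JP", False),
]

def preference_by_country(responses):
    """Fraction of responses in each country that spared the young."""
    counts = defaultdict(lambda: [0, 0])  # country -> [spared_young, total]
    for country, spared_young in responses:
        counts[country][0] += int(spared_young)
        counts[country][1] += 1
    return {country: spared / total for country, (spared, total) in counts.items()}

print(preference_by_country(responses))
```

With real data, the per-country rates would then be clustered and correlated with cultural and economic indicators, which is how the Western/Eastern/Southern groupings below emerge.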
The preference to spare higher-status characters is much less pronounced in Eastern countries (Japan, Taiwan, Saudi Arabia, Indonesia) and much more pronounced in Southern countries (Central and South America, France).
Western and Eastern countries exhibit a much higher preference for sparing humans over pets, compared to Southern countries.
Despite the large sample size, the study has several limitations. For example, the researchers did not consider uncertainty about the fates of the characters: every character was identified as a child or an adult with 100% certainty, and every life-or-death outcome was assumed to occur with 100% certainty.
They also did not introduce a hypothetical relationship (for example, spouse or relative) between respondents and characters. Although these assumptions are unrealistic, they were necessary to keep the experiment tractable.
Read: Self-Driving Vehicles Find It Hard To Detect Dark-Skinned Pedestrians
We might never reach universal agreement, but tech companies and car manufacturers can use these results to better understand how the public would react to the ethics of different design and policy decisions.