Can a self-driving vehicle be moral, act like humans do, or act like humans expect humans to act? Contrary to previous thinking, a ground-breaking new study has found for the first time that human morality can be modeled, meaning that machine-based moral decisions are, in principle, possible.

The research, “Virtual Reality experiments investigating human behavior and moral assessments,” from the Institute of Cognitive Science at the University of Osnabrück and published in Frontiers in Behavioral Neuroscience, used immersive virtual reality to study human behavior in simulated road traffic scenarios.

The participants were asked to drive a car through a typical suburban neighborhood on a foggy day, during which they were confronted with unexpected, unavoidable dilemma situations involving inanimate objects, animals, and humans, and had to decide which was to be spared. The observed decisions were then conceptualized by statistical models, leading to rules with an associated degree of explanatory power. The research showed that moral decisions within the confined scope of unavoidable traffic collisions can be explained well, and modeled, by a single value of life assigned to every human, animal, or inanimate object.
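To make the idea concrete, the sketch below shows how such a value-of-life model could in principle be fitted to observed dilemma choices. Everything here is an illustrative assumption rather than the study’s actual method or data: the category set, the made-up observations, and the use of a simple logistic choice model (the probability of sparing one target over another depends on the difference of their fitted values).

```python
# Minimal, hypothetical sketch of fitting a value-of-life choice model.
# Each dilemma presents two potential collision targets; the participant
# spares one. We model P(spare A over B) = sigmoid(v[A] - v[B]) and fit
# one value v per category by maximum likelihood. Data are invented.

import numpy as np
from scipy.optimize import minimize

CATEGORIES = ["human", "animal", "object"]  # hypothetical category set
idx = {c: i for i, c in enumerate(CATEGORIES)}

# Hypothetical observations: (spared category, sacrificed category).
observations = [
    ("human", "animal"), ("human", "object"), ("animal", "object"),
    ("human", "animal"), ("animal", "object"), ("human", "object"),
    ("animal", "human"),  # an occasional "inconsistent" choice
]

def neg_log_likelihood(v):
    # -log sigmoid(v_spared - v_sacrificed), summed over all choices,
    # written stably via logaddexp; a tiny L2 penalty pins the scale,
    # since only value differences are identified by the data.
    nll = sum(
        np.logaddexp(0.0, -(v[idx[spared]] - v[idx[sacrificed]]))
        for spared, sacrificed in observations
    )
    return nll + 1e-3 * np.sum(v ** 2)

result = minimize(neg_log_likelihood, np.zeros(len(CATEGORIES)))
for c in CATEGORIES:
    print(f"fitted value of life for {c}: {result.x[idx[c]]:.2f}")
```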

Modeling Moral Behavior

Leon Sütfeld, first author of the study, says that until now it has been assumed that moral decisions are strongly context-dependent and therefore cannot be modeled or described algorithmically: “But we found quite the opposite. Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model, in which a value is attributed by the participant to every human, animal, or inanimate object.” This implies that human moral behavior can be well described by algorithms that could be used by machines as well.
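Once per-category values have been fitted (as in the sketch above), a machine could in principle apply them as a trivially simple decision rule. Again, the function and the numbers below are hypothetical illustrations, not the study’s algorithm.

```python
# Hypothetical decision rule using fitted value-of-life scores:
# in an unavoidable collision, steer toward the option whose loss
# costs the least value. All names and numbers are illustrative.

from typing import Dict, List

def collision_target(options: List[str], value_of_life: Dict[str, float]) -> str:
    """Return the option with the lowest value of life; the rest are spared."""
    return min(options, key=lambda o: value_of_life[o])

fitted = {"human": 4.2, "animal": 1.1, "object": 0.0}  # made-up values
print(collision_target(["animal", "object"], fitted))  # -> "object"
```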

The study’s findings have major implications for the debate around how self-driving cars and other machines should behave in unavoidable dilemma situations. For example, a new initiative from the German Federal Ministry of Transport and Digital Infrastructure (BMVI) has defined 20 ethical principles for self-driving vehicles, including principles governing behavior in the case of unavoidable accidents, on the critical assumption that human moral behavior cannot be modeled. […]