
The moral machine: Who lives, who dies, you decide!

Once you decide how machines should resolve moral tradeoffs, the question becomes how to actually implement it.

Imagine: at some point in a not-so-distant future, you’re riding down the highway in a self-driving car, boxed in on all sides by other vehicles. You find yourself in a life-threatening situation where your car won’t be able to stop in time to avoid a collision.

It has a choice: either collide with one of the other vehicles, endangering another passenger’s life, or put your own life in harm’s way.


What do you think it would do?

If we were driving a car in manual mode, whichever way we chose, it would be considered a reaction to the situation as opposed to a deliberate decision—an instinctual, potentially panicked reaction with no forethought or malice. 

However, if a programmer were to instruct the car to make the same call in a life-threatening situation, it could be interpreted as premeditated homicide. A programmed, self-driving vehicle would, at some point, take a life to save another.

So, who do we tell it to save when morality dictates saving both lives?

The Moral Machine experiment is all about finding answers to such morally grim questions.

Created by researchers Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon and Iyad Rahwan, the online experimental platform explores the moral dilemmas faced by autonomous vehicles. 

The story behind the Moral Machine

When Syrian informatics engineer Edmond Awad enrolled himself in an introductory course on AI, he unknowingly entered a world that would forever alter his perception of life.

“I was fascinated by the concepts of many AI techniques like neural networks and genetic algorithms. It pushed me to read more about it. Then, when I went to grad school, I chose to work for my master’s and PhD on topics in multi-agent systems and symbolic AI. I also had a special interest in morality, culture, and religions. So, in 2015—right before AI ethics became popular—as I was about to start a programme at the MIT Media Lab, my advisor Iyad Rahwan told me about this paper he had with Jean-François Bonnefon and Azim Shariff on the ethics of automated vehicles (which was eventually published in Science). I was excited to learn that there is a potential research topic that brings together my interests in AI and ethics”, Awad tells Analytics India Magazine.

Upon expressing his interest in the subject to Rahwan, the pair began deliberating on potential follow-up work to their paper. They discussed what other factors might influence people’s decisions in trolley-like situations. 

Eventually, Rahwan suggested working on a website that would combine all potential factors. The goal was twofold: to collect data about the popular perception of moral decisions taken by machines and to design a public engagement tool that promotes the discussion around the ethics of machines.

The main functionality of the website is the Judge interface, where you are presented with thirteen scenarios representing dilemmas faced by a self-driving car. These dilemmas are inspired by the Trolley problem.

(Image source: Nature.com)

Each dilemma presents two potential negative outcomes, each resulting in loss of life. The number, gender and age of the characters involved, along with other features of the characters and the environment, vary with each occurrence.

For each scenario, the user chooses the preferred outcome. At the end of the experiment, a summary of the decisions taken is presented, along with a comparison to other users’ choices and an optional survey.
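The anatomy of such a dilemma can be sketched as a small data model. The Python sketch below is purely illustrative: the class and field names are assumptions for exposition, not the Moral Machine’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Character:
    species: str  # e.g. "human" or "pet"
    age: str      # e.g. "child", "adult", "elderly"
    gender: str   # e.g. "female", "male"

@dataclass
class Outcome:
    casualties: list[Character]    # who dies if this outcome is chosen
    pedestrians_law_abiding: bool  # were the pedestrians crossing legally?

@dataclass
class Dilemma:
    stay: Outcome    # the car keeps its course
    swerve: Outcome  # the car swerves

def judge(dilemma: Dilemma, choice: str) -> dict:
    """Record a participant's preferred outcome for one scenario."""
    if choice not in ("stay", "swerve"):
        raise ValueError("choice must be 'stay' or 'swerve'")
    return {"dilemma": dilemma, "choice": choice}

# One scenario: staying kills an elderly pedestrian, swerving kills a child.
scenario = Dilemma(
    stay=Outcome([Character("human", "elderly", "male")], True),
    swerve=Outcome([Character("human", "child", "female")], True),
)
decision = judge(scenario, "stay")
```

Thirteen such records per session, plus the optional survey answers, would be enough to reproduce the kind of summary-and-comparison screen described above.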

There are other parts of the website which allow users to design their own dilemmas (Design interface) as well as browse dilemmas designed by others (Browse interface). 

Following the deployment of the website, the team added a Classic interface that presents three variants of the classic ‘Trolley problem’.

Findings

The Moral Machine attracted worldwide attention and allowed the team to collect 40 million decisions in ten languages from millions of people across 233 countries and territories.

(Image source: Nature.com)

Based on the moral preferences of their citizens, countries congregate into three clusters: Western, Eastern and Southern. Interestingly, participants showed strong preferences for AVs to spare humans over pets, to spare more lives over fewer lives and to spare younger humans over older humans.

While the general direction of the preferences was universal (e.g., most countries preferred sparing the lives of younger humans over older humans), the magnitude of these preferences varied considerably across countries (e.g., the preference to spare younger lives was less pronounced in Eastern countries).

(Image source: Nature.com)

Differences between countries may be explained by modern institutions and deep cultural traits (e.g., countries with a stronger rule of law have a higher preference for sparing the law-abiding pedestrians at the cost of those flouting road safety laws).
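As a toy illustration of how such a country-level preference magnitude might be summarized, the sketch below computes, per country, the share of age-contrast dilemmas in which the younger character was spared. The data format and function names are assumptions made up for exposition; this is not the paper’s actual statistical methodology.

```python
def spare_young_rate(decisions):
    """Fraction of young-vs-old dilemmas in which the younger side was spared.

    `decisions` is a list of dicts like
    {"country": "FR", "contrast": "age", "spared_younger": True}.
    """
    relevant = [d for d in decisions if d["contrast"] == "age"]
    if not relevant:
        return None
    return sum(d["spared_younger"] for d in relevant) / len(relevant)

def by_country(decisions):
    """Group decisions by country and compute the preference per country."""
    grouped = {}
    for d in decisions:
        grouped.setdefault(d["country"], []).append(d)
    return {c: spare_young_rate(ds) for c, ds in grouped.items()}

# Fabricated data: the direction is the same in both countries (> 0.5),
# but the magnitude differs, mirroring the East/West contrast above.
toy = (
    [{"country": "FR", "contrast": "age", "spared_younger": s}
     for s in [True] * 9 + [False]] +
    [{"country": "JP", "contrast": "age", "spared_younger": s}
     for s in [True] * 6 + [False] * 4]
)
rates = by_country(toy)  # {"FR": 0.9, "JP": 0.6}
```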

On this variety of responses, all of which could be considered moral, Awad explains:

“For many of these tradeoffs, there is no one ideal resolution (or a framework) that all experts agree on. But in most cases, there are multiple ethically defensible solutions that are supported by different groups of experts. This does not mean the answer to your question is easy.

For a long time, we [have] accepted and lived with the idea of having multiple accepted ethical frameworks. But now, with the increasing autonomy of machines, preparing them to take central roles in society, we are forced to make a choice on how these machines should resolve moral tradeoffs. 

The choice of which ethical framework should govern the machine’s decision should be chosen from one of those ethically-defensible, well-thought solutions. [But] which one? Perhaps the one that people like the most, or the one most liked by the elected representatives in charge of making such a decision. 

Now once you make a decision on how machines should resolve moral tradeoffs, the question is how to actually implement it. And that’s a different challenge altogether.” 

Check out more here: The Car That Knew Too Much by Jean-François Bonnefon

Unbiased machines, the North Star

Any AI system can be presumed to be biased with respect to some parameter. Even when limiting ourselves to one dimension (for instance, gender), there are numerous ways to define ‘bias’ and ‘fairness’ in any given instance.

In fact, it has been shown that there are situations in which three sensible, simple definitions of fairness cannot all be upheld simultaneously by any non-trivial classifier.
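That impossibility claim can be made concrete with a toy numeric sketch: applying one shared decision threshold to two groups with different base rates can still produce unequal error rates, so at least one fairness definition has to give. The numbers below are fabricated purely for illustration.

```python
def predict(scores, threshold=0.5):
    """Apply one shared decision threshold to every individual."""
    return [1 if s >= threshold else 0 for s in scores]

def error_rates(labels, preds):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(l == 0 and p == 1 for l, p in zip(labels, preds))
    fn = sum(l == 1 and p == 0 for l, p in zip(labels, preds))
    return fp / labels.count(0), fn / labels.count(1)

# The same scoring model and threshold applied to two groups whose
# true base rates differ (0.5 vs 0.25).
scores_a, labels_a = [0.9, 0.9, 0.6, 0.2], [1, 1, 0, 0]
scores_b, labels_b = [0.9, 0.2, 0.2, 0.2], [1, 0, 0, 0]

fpr_a, fnr_a = error_rates(labels_a, predict(scores_a))  # (0.5, 0.0)
fpr_b, fnr_b = error_rates(labels_b, predict(scores_b))  # (0.0, 0.0)
```

Here “equal treatment” (one threshold for everyone) still yields unequal false positive rates across the groups, which is the flavour of tension the impossibility results formalize.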

Awad says, “This does not mean we should give up on building unbiased machines, but it helps us scope where to focus the work. In fact, some experts believe that fixing machine bias is easier than fixing human bias. But essentially, there is a choice to be made here about what kind of fairness is desirable. This is getting back to moral tradeoffs again.”

There are, of course, less contentious problems of bias: problems that result in clear harm to society in general or to minority groups.

Generally speaking, adopting a responsible, reflective approach to developing AI systems can be helpful in mitigating such potential harm and avoiding unintended consequences. Such an approach would engage with a diverse group of stakeholders from the beginning.

“In cases of AI systems prepared to play a big role in society, we can learn from the development of safety-critical systems that use a package of safety procedures such as adopting different layers of safety, and performing iterations of testing and evaluation in controlled environments and using simulations before deployment”, Awad adds.

Moral Machine spin-offs

Edmond Awad says, “The Moral Machine project spurred many follow-up projects that focus on studying the moral behaviour and moral decision-making of humans and machines in different contexts and across different societies and to provide proof-of-concept computational models to implement ethical decision-making in AI-based algorithms.

These projects have inspired me to co-lead a perspective piece with Sydney Levine that proposes a research agenda and a framework titled ‘Computational Ethics.’

We co-wrote the paper with a team of world-leading scholars from different disciplines, including philosophy, computer science, cognitive sciences and social sciences. In it, we propose a computationally-grounded approach for the study of ethics, and we argue that our understanding of human and machine ethics will benefit from such a computational approach.”

The Moral Machine has also inspired the methodology of several follow-up projects: websites built as serious online games with the goal of collecting large-scale data.

One such project is ‘MyGoodness’, a website that generates charity dilemmas with the goal of identifying the different factors that may influence people to give ineffectively. Awad led the creation of the website with his advisor Iyad Rahwan, as well as Zoe Rahwan and Erez Yoeli. The project was created in cooperation with The Life You Can Save foundation.

Since its deployment in December 2017, ‘MyGoodness’ has been visited by 250,000 users who have contributed over three million responses. There are other projects in preparation using a similar approach.

More recently, Edmond Awad was co-investigator on a large EPSRC-funded grant with the goal of investigating and developing the first AI system for air traffic control.

“Our team, led by Tim Dodwell, is composed of researchers from the Universities of Exeter and Cambridge, The Alan Turing Institute, and NATS, the main provider of air traffic control services in the UK. The project is still at an early stage, but we have already identified challenges and lessons that we plan to share publicly at some point”, Awad reveals.

Researcher’s goal: an informed public engagement

At the end of his discussion about the experiments and their implications, Edmond Awad shared his thoughts about the scope of the research itself; the value of curtailing misinformation; and communicating the implications of such technological and scientific advancements to the public with clarity.

“I would like to think that our role as researchers is to create knowledge. But there is a lot of work that needs to be done to effectively deliver this knowledge to the public. The spread of misinformation and the lack of trust in science in the last few years—especially with the dire consequences during Covid—is an alarm for all academics and researchers that more work should be done in communicating the knowledge we create and in engaging the public in discussions around the societal and ethical considerations of scientific and technological advances.”

Edmond Awad, Assistant Professor–Institute for Data Science and Artificial Intelligence, University of Exeter

Sri Krishna
Sri Krishna is a technology enthusiast with a professional background in journalism. He believes in writing on subjects that evoke a thought process towards a better world. When not writing, he indulges his passion for automobiles and poetry.
