
The moral dilemma of embedding ethics into autonomous vehicles

AVs are among the first autonomous agents that make judgments with potentially life and death consequences.


“When I die, I want to die like my grandfather who died peacefully in his sleep. Not screaming like all the passengers in his car.” ― Will Rogers. 

Autonomous vehicles (AVs) are one of the most polarising technologies primed for mass adoption. The huge potential of AVs can translate to improved mobility experiences, including lower costs, more relaxation time for drivers, and less pollution. In the future, AVs with decision-making capabilities could avert accidents to a far greater extent.

As things stand, self-driving cars have a higher accident rate than human-driven cars. Self-driving cars account for 9.1 accidents per million miles driven, as opposed to regular vehicles’ 4.1.

The March 2018 fatal Uber crash in Tempe, Arizona, caused the world’s first pedestrian death by a self-driving car. The vehicle struck a pedestrian after its automatic emergency braking system failed to engage. The safety driver was later charged with negligent homicide.

AVs are among the first autonomous agents that make judgments with potentially life and death consequences. Though autonomous driving technology has made exponential strides in the last decade, the ethical side is still stuck in dicey territory. This is because humans and AVs make ethical judgements in fundamentally different ways.

Crash-avoidance behaviours are hardwired in humans, and our reflexes kick in within about two seconds in the face of danger. But can humans be held morally responsible for their involuntary reactions to such situations? AVs, on the other hand, are outfitted with sophisticated sensors and algorithms that can predict and react to collisions better than human drivers. However, AV judgments don’t take ethical repercussions into account.

The moral dilemma

The AV ethical dilemma is an extension of the trolley problem and revolves around two main concepts: deontology (judging actions as good or bad according to a set of rules and duties) and utilitarianism (judging actions as good or bad based on their outcomes). However, many researchers dismiss AV ethics based on the trolley problem for the following reasons:

• Hypothetical scenarios built over the trolley problem are simplistic and vague. Most AV moral dilemma situations focus primarily on the outcomes of specified binary options, such as the number or characteristics of people affected. Other crucial AV crash-related elements, including rules, duties, and moral norms, are ignored in this approach.

• The results are almost always skewed. Trolley problem-solving scenarios frequently begin by favouring a particular moral theory, resulting in a skewed interpretation of the outcomes. Due to this, studies have revealed a disparity between people’s preferences and their acceptance of utilitarian AVs. For example, many like the idea of utilitarian AVs that save more lives but prefer buying AVs that put their own safety first.

• The trolley problem has no level playing field. The choice to kill is predicated on the social mores and the characteristics of the parties involved (e.g., saving women and killing men), and is dismissive of the equal right to life. Further, discrimination based on personal characteristics has no legal standing in many countries. As a result, AI’s biases tend to outrage the masses and, in turn, stand in the way of mass adoption of AVs.

AV ethics based on trolley problems is dogmatic since it relies on a single moral doctrine (e.g., utilitarianism). Human morality, however, is pluralistic, which calls for alternative AV ethical frameworks beyond utilitarianism.
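The gap between the two doctrines can be made concrete with a toy sketch. This is purely illustrative — the `Option` fields, the rule, and the casualty numbers are invented for the example, not drawn from any real AV system:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A hypothetical crash-avoidance manoeuvre and its projected outcome."""
    name: str
    expected_casualties: int
    swerves_into_bystander: bool  # violates a "do not actively redirect harm" duty

def utilitarian_choice(options):
    """Utilitarian view: judge only by outcomes — fewest expected casualties wins."""
    return min(options, key=lambda o: o.expected_casualties)

def deontological_choice(options):
    """Deontological view: reject any option that breaks the duty, whatever the outcome."""
    permitted = [o for o in options if not o.swerves_into_bystander]
    return permitted[0] if permitted else None

options = [
    Option("stay in lane", expected_casualties=3, swerves_into_bystander=False),
    Option("swerve onto pavement", expected_casualties=1, swerves_into_bystander=True),
]

print(utilitarian_choice(options).name)     # "swerve onto pavement"
print(deontological_choice(options).name)   # "stay in lane"
```

The same crash scenario yields opposite decisions depending on which single doctrine is hardcoded — which is exactly why a pluralistic approach is needed.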

Time for a new approach

The limitations of the trolley problem-based AV ethics can be addressed by incorporating multiple crash contexts and human values. This entails explaining AV moral behaviours to ensure AV systems are transparent. One method to accomplish this is to create an AV framework that explains and forecasts the entire ethical decision-making process based on the values of end-users.

Inarguably, a coordinated and interdisciplinary effort from the technological, regulatory, and social spheres is the need of the hour. It’s critical for stakeholders across the domain (such as AV developers, engineers, regulators, ethicists, and social scientists) to have an open dialogue about establishing value-aligned moral behaviours in AV.

Artificial intelligence is a fledgling field, and we have to account for the unknown unknowns before we codify AV ethics. Most AVs are developed by engineers, transportation specialists, policymakers, and AI ethicists, and the likelihood of their biases creeping into the AV tech stack is high. Drivers’ moral judgement plays a huge role in their decision-making and should be accounted for while building AI models for AVs. Additionally, ethical decision-making must explain both the intuitive and the rational aspects of an AV’s ethical behaviour. To this end, researchers recommend using the dual-process theory of moral reasoning to explain and predict pluralistic moral reasoning.
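One way to read the dual-process idea in decision-making terms is as a two-stage pipeline: a fast, rule-based pass (the "intuitive" system) vetoes options outright, and a slower outcome evaluation (the "rational" system) ranks whatever survives. The sketch below is an assumption about how such a pipeline could be structured, not a description of any deployed AV system; the option fields and rules are hypothetical:

```python
def dual_process_choice(options, hard_constraints, cost):
    """Two-stage moral reasoning sketch.

    Stage 1 ("intuitive"): discard any option that violates a hard constraint.
    Stage 2 ("rational"): among the survivors, minimise expected harm.
    If every option is vetoed, fall back to pure harm minimisation.
    """
    permitted = [o for o in options if all(rule(o) for rule in hard_constraints)]
    candidates = permitted if permitted else options
    return min(candidates, key=cost)

options = [
    {"name": "brake hard",   "expected_harm": 2, "leaves_lane": False},
    {"name": "swerve left",  "expected_harm": 1, "leaves_lane": True},
    {"name": "swerve right", "expected_harm": 4, "leaves_lane": True},
]
rules = [lambda o: not o["leaves_lane"]]  # hypothetical duty: never leave the lane

best = dual_process_choice(options, rules, cost=lambda o: o["expected_harm"])
print(best["name"])  # "brake hard": the rule pass removes both swerve options
```

Note that the rule pass changes the answer: without the constraint, pure harm minimisation would pick "swerve left" — the two systems genuinely interact rather than one simply overriding the other.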

We need to develop descriptive ethics to better understand the AV moral dilemma. With little AV crash data and no consensus on an AV ethics code just yet, we are in no position to build ethically conscious AI models for AVs. And once AV decisions and policies are established, they will be difficult to change — meaning there is little margin for error when designing normative ethical principles for AVs.


Sri Krishna

Sri Krishna is a technology enthusiast with a professional background in journalism. He believes in writing on subjects that evoke a thought process towards a better world. When not writing, he indulges his passion for automobiles and poetry.