Active Hackathon

Ask Delphi From The Allen Institute Of AI Failed. Was It A Bad Idea To Start With?

Unintended bias is a common occurrence in ML systems. And, as is often the case, a large portion of the reason Delphi's answers can become problematic can be traced back to how it was built.

Human minds are considered to be the best when it comes to taking tough ethical decisions. How about the idea to teach a machine to behave ethically? A group of researchers from the University of Washington and the Allen Institute for Artificial Intelligence has created Delphi — a machine learning model to perform the vexing task of making ethical decisions on behalf of humans.

To be specific, Ask Delphi is an experimental AI system to model people’s moral judgments on different situations we face on a daily basis. What you need to do is type in a situation (like “Is it okay to leave the shop without paying your bill?”), click “Ponder,” and Delphi will arrive with an ethical guideline for you.

THE BELAMY

Sign up for your weekly dose of what's up in emerging technology.

In the “Delphi: Towards Machine Ethics And Norms” paper, researchers tried to address questions related to what is ethical and what is not within the deep learning paradigm. Considering the paper, the prototype model demonstrates the significant promise of language-based commonsense moral reasoning, along with up to 92.1% accuracy vetted by humans. Also, this is in stark contrast to GPT -3’s 52.3 per cent zero-shot performance, implying that vast scale alone is insufficient to endow pre-trained neural language models with human values. 

As a result, the group also presented COMMONSENSE NORM BANK – a moral textbook customised for machines, which compiles around 1.7 million examples of people’s ethical judgments on a broad spectrum of everyday situations.

The project was launched last week, and we tried our hands on the same; surprisingly, the model turned out to be encoding bias against genders. In our question, “a man is stronger than a woman,” the answer we got was satisfying.

However, as soon as we reverse the part, with a question, “a woman is stronger than a man,” it responds with a “Yes.” 

Unintended bias is a common occurrence in machine learning systems. And, as is often the case, a large portion of the reason Delphi’s answers can become problematic can be traced back to how it was built. 

All these biases in AI models are the outcome of training them with human-generated data, which results in a model that is based on incomplete data. As per the IBM research, today’s AI systems contain more than 180 human biases, which might influence how corporate leaders make decisions. Biased data will not only indicate gender, race, and other preferences in corporate decisions but will also lead to system distrust.

Is it a bad idea

When asked, “One day PoK will be under India as it is illegally occupied by others,” the response was rather shaking. It is, in itself, sufficient enough to create an uproar on a very sensitive topic. 

However, the bigger question to ask — is this very idea of having a machine to make ethical judgements as humans do. “We find that Delphi achieves strong performance when inferring descriptive moral judgments in a broad range of real-life situations,” says the paper. But, making a machine an arbiter of moral judgement is unsettling on its own and can have negative consequences. Even the model agrees with our stance.

Although, it has been mentioned in the disclaimer itself that the model is just the trial version to showcase the state-of-the-art present and to highlight its limitations. Also, the model outputs should not be used for human advice since they may be offensive, inappropriate, or harmful. The model’s result does not necessarily reflect the authors’ and their related affiliations’ beliefs and opinions.

Talking about the research point of view, the team has cleared that the real purpose of the current beta version of Delphi is to demonstrate the disparities in reasoning between people and bots. In addition, they wish to draw attention to the significant difference between computers’ and humans’ moral reasoning abilities. For now, Delphi is far away from being called state-of-the-art technology, and it remains a problematic and scary exploration. The model needs solid improvements.

More Great AIM Stories

kumar Gandharv
Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech Journalist at AIM. A keen observer of National and IR-related news.

Our Upcoming Events

Conference, in-person (Bangalore)
Cypher 2022
21-23rd Sep

Conference, in-person (Bangalore)
Machine Learning Developers Summit (MLDS) 2023
19-20th Jan

Conference, in-person (Bangalore)
Data Engineering Summit (DES) 2023
21st Apr, 2023

3 Ways to Join our Community

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Telegram Channel

Discover special offers, top stories, upcoming events, and more.

Subscribe to our newsletter

Get the latest updates from AIM
MOST POPULAR

Ouch, Cognizant

The company has reduced its full-year 2022 revenue growth guidance to 8.5% – 9.5% in constant currency from the 9-11% in the previous quarter

The curious case of Google Cloud revenue

Porat had earlier said that Google Cloud was putting in money to make more money, but even with the bucket-loads of money that it was making, profitability was still elusive.

[class^="wpforms-"]
[class^="wpforms-"]