
Most AI Doomers Have Never Trained An ML Model in Their Lives 

Everyone now compares AI to the atomic bomb and talks about AI ethics.

Isaac Asimov, the mid-20th-century science-fiction writer known for his Robot series, introduced the “Three Laws of Robotics”, which still play an essential role in discussions of ethical AI.

The first states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The second: “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.” And the third: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

These three laws are among the focuses of the five-day Artificial Intelligence Ethics programme run by the University of Oxford. Interestingly, the programme does not require detailed knowledge of AI/ML, or of how these systems work, before governing and drawing ethical boundaries around them. It appears that anyone with any qualification can take this course and tout themselves as an ‘AI Ethicist’.

This raises the question: if a person has never built a single ML model in their life, what qualifies them to put guardrails around these highly capable, if scary, systems?

AI Doomers are AI Boomers

To see it from the AI doomers’ perspective, many of them are influenced by the decades of movies about “machines taking over the world.” They clearly fear that the systems big tech is developing, which are increasingly touted as inching towards sentience, could end up taking over humanity.

However, even if such a person takes a course on AI ethics without ever learning how machines learn, what qualifies them to make laws about AI?

Recently, the Israeli historian and philosopher Yuval Noah Harari expressed his skepticism about the possibilities of AI models like ChatGPT. “In the future, we might see the first cults and religions in history whose revered texts were written by a non-human intelligence,” said Harari. It seems like a far-fetched idea.

Warren Buffett has also voiced his worries about the dangers of AI, comparing it to the creation of the atomic bomb. Even the Pope has called for the ethical use of AI.

Interestingly, Geoffrey Hinton, after leaving Google, is also concerned about the ethical implications of AI. In 2018, he dismissed the need for explainable AI, and he also disagreed with Timnit Gebru, a former AI ethicist at Google, over the existential risks these LLMs pose. But even then, he would quote Oppenheimer when speaking about the topic. In the past, Sam Altman has also compared the potential of the technology he is trying to develop to the atomic bomb.

On the other hand, when AI experts like Hinton or Yann LeCun, the godfathers of AI who have been in the field since the beginning, raise concerns about the capabilities of these AI models, the conversation starts to get interesting, and the questions around ethics start stirring up.

Hinton’s most important reason for leaving Google was to speak freely about the ethical implications of these AI models. In hindsight, he also regrets building them, saying he should have started speaking about these dangers much sooner.

Still Not on the Same Page

Last week, after the heads of Microsoft, Google, and OpenAI met with the Biden administration at the White House, there has been increasing talk about the ethical implications of these products. Though there is no way of knowing exactly what was said, it was likely about placing the responsibility on these leaders to make AI ethical.

On the flip side, ever since the AI chatbot race started, the companies behind these “bullshit generators” have been laying off their ethical and responsible AI teams. It seems big tech has decided that it does not actually need an ethics team to build guardrails around its products. The possibility is that an ethics-minded person on the team might question or hinder the steps the company is taking with its product.

Moreover, with each big-tech company trying to get ahead of the others in the AI race, they might overlook the ethics of these models. There is a real possibility that the tech giants might not come onto the same page as governments on these concerns.


Before getting convinced by this argument, it is also important to understand that ethical AI matters to big tech as well. The problem arises when the ethics teams get fixated on solving the biases in systems instead of making them “safe”. To put it in the words of Elon Musk from his BBC interview, “Who decides what is hateful content?” But interestingly enough, Musk was one of the top voices calling for a pause on giant AI training experiments, and is now building his own AI systems to rival OpenAI.

Even OpenAI, the creator of ChatGPT, has been increasingly vocal about its fears around these AI models. In an interview with ABC News, Altman said, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.” Altman believes that as long as the technology is in his hands, it is safe, and that he will try to keep it ethical. He also said that society has a limited time to adapt and put safety limits on AI.

Coming back to the layoffs of big tech’s ethics teams, it is probably safe to say that these companies want to win the AI race rather than build “ethical” robots. Or maybe the people who were laid off simply weren’t aligned with the companies they worked for. Who knows who is in the right?

Not All Ethicists Restrict AI

To separate the wheat from the chaff: it would be wrong to say that no AI ethicist understands these AI systems and how they work. Ethicists like Timnit Gebru and Alex Hanna, who have been part of the big-tech companies building these AI systems, are now working on the AI alignment problem at the Distributed AI Research Institute (DAIR).

Describing itself as “a space for independent and community-rooted AI research”, DAIR addresses the bias problems within these systems while also examining how these models might invade the privacy of users and citizens of the world. Maybe Gebru and Hanna parted ways with Google after seeing some serious ethical concerns.

Moreover, there is a new breed of ethicists in the field of AI who talk about the rights of AI itself. This echoes Asimov’s third law, under which a robot protects its own existence. Jacy Reese Anthis, co-founder of the Sentience Institute, told AIM that we need an AI rights movement: “even if they currently have no inner life, they might in the future.” Clearly, the conversation is moving in the right direction.

This suggests that what the current crop of AI ethicists is missing is not sociological understanding of the world, but knowledge of AI. While the former is immensely important, the absence of the latter gets their stances overlooked and dismissed. Big tech needs more ethicists who know how AI systems actually work. When that is the case, maybe we will be able to make AI “ethical”. Till then, big tech makes the move.

PS: The story was written using a keyboard.

Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.