AI Researchers are the Architects of their Own Destruction

Oh the irony! To build something and then sell the fear of it.

To be scared of something you have built is probably the worst of all fears, because it comes bundled with guilt and resentment over what you have done. This is the case for several pioneering AI researchers, such as Geoffrey Hinton and Yoshua Bengio, who now speak of stepping back from their life’s work to warn about the dangers of AI.

Bengio told the BBC that he feels “lost” over his life’s work. He is the second “godfather” of AI to voice such fears and call for regulation. Hinton was the first, leaving Google to warn the world about the pace at which the technology is developing. He has described AI as “an existential risk”.

A protester outside a London event at which Sam Altman spoke | Getty Images

Overselling fear

Last week, hundreds of AI scientists and researchers signed a statement warning that the future of humanity is at risk. The Statement on AI Risk, signed by Hinton and Bengio, also carried the signatures of OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Stability AI’s Emad Mostaque, among others.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement. Essentially, most of the signatories are worried about what happens if the technology falls into the hands of “bad actors”, which include militaries and terrorist organisations.

When the experts themselves point out the risks of a technology, it is tempting, and perhaps too easy, to give in to the fear. On the flip side, the people thriving in the AI revolution are pushing for faster development. That is partly because building AI models like ChatGPT has become easier with open source alternatives. Take, for example, the recent debate over uncensored LLMs that are outperforming much larger models.

This may also be one of the reasons the AI experts are turning into AI doomsayers: they are increasingly afraid of the technology ending up in the hands of people who do not fully grasp how it could go rogue.

On the contrary, the reaction of the third godfather of AI, Yann LeCun, to researchers prophesying doomsday scenarios has been a facepalm. LeCun, Meta’s AI chief, has been vocal in criticising people who overestimate the capabilities of developing generative AI models such as ChatGPT and Bard.

Another AI expert calling out the experts-turned-doomers is Kyunghyun Cho, regarded as one of the pioneers of neural machine translation, work that paved the way for Google’s Transformers. Cho said in a media interview that glorifying “hero scientists” amounts to taking their words as gospel truth without reasoning through what they actually mean.

He expressed frustration towards the developing AI discourse and how the US Senate hearing was disappointing as it did not talk about the benefits of AI, but only the problems related to it. “I’m disappointed by a lot of this discussion about existential risk; now they even call it literal ‘extinction’. It’s sucking the air out of the room,” he said.

Losing Credibility

The point is, it is very easy for someone who has never deployed a single ML model in their life to become an AI doomer; they are often influenced by the hundreds of movies about machines taking over the world. But when the so-called AI experts join the same narrative, with the same explanation, it starts to look like fear mongering rather than a reasoned argument.

Take, for example, Elon Musk, the longest-running AI doomer, who has not stopped yet. In a recent interview with the Wall Street Journal, Musk said, “I don’t think AI is going to try to destroy all humanity, but it might put us under strict controls.” He still believes there is a “non-zero chance” of AI going full Terminator on humanity.

Earlier, the petition to pause giant AI experiments and halt training of models beyond GPT-4 was welcomed by many AI researchers, including Musk and Steve Wozniak. The new statement looks quite similar to that one, except this time it includes Altman, which suggests motivations beyond genuine fear of the technology overtaking humanity.

Altman has asked the government to regulate AI development, while at the same time holding the opinion that OpenAI should be evaluated differently. This calls into question the company’s motivation for pushing government regulation of AI, since such regulation is a clear conflict of interest for Altman.

We all know that fear sells. “Unfortunately, the sensational stories get read more. The idea is that either AI is going to kill us all or AI is going to cure everything — both of those are incorrect,” explained Cho.

Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
