Advertisement

Watch Out: AI May Actually Be Able to Wipe Out Humanity

Scientists from the University of Oxford and affiliated with Google DeepMind have released a paper that explores the possibility of super-intelligent agents wiping out humanity
Listen to this story

The most popular trope in science fiction has been robots taking over the world. What seemed like fiction then, is slowly being feared as the inevitable reality in the near future owing to the break-neck pace at which AI is progressing. Stephen Hawking, in a 2014 interview with WIRED, said, “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”

Looks like these fears are not totally unfounded. 

Scientists from the University of Oxford and affiliated with Google DeepMind have released a paper that explores the possibility of superintelligent agents wiping out humanity.

Reward at any cost

For this study, the researchers considered ‘advanced’ agents – referring to agents that can effectively select their outputs or actions to achieve high expected utility in a wide variety of environments. They selected an environment as close as possible to the real world. In such a scenario, since the agent’s goal is not a hard-coded function of its action, it would need to plan its activities and learn which actions serve them in attaining their goal.

The researchers show that an advanced agent who is motivated by a ‘reward’ to intervene is likely to succeed – more often than not, with catastrophic results. When the agent starts interacting with the world and receiving percepts to learn more about its environment – there are innumerable possibilities. The scientists argue that a sufficiently advanced agent would thwart any attempt (even the ones made by humans) to prevent it from attaining the said reward. 

“One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer,” the paper says, further adding, “Proper reward-provision intervention, which involves securing reward over many timesteps, would require removing humanity’s capacity to do this, perhaps forcefully.”

As per the paper, life on Earth will turn into a zero-sum game between humanity. Advanced agents would try to harness all available resources to grow food and avail other necessities and protect against escalating attempts to stop it.

Read the full paper here.

Real threat or exaggeration

In a 2020 interview with The New York Times, Elon Musk had said that AI is likely to overtake humans. He said that artificial intelligence will be much smarter than humans and will overtake the human race by 2025. He strongly believes that AI will wipe out humanity and has time and again said that it will destroy humanity without even thinking about it.

In 2018, while speaking at the South by Southwest (SXSW) tech conference in Texas, he had said that AI is far more dangerous than nukes. He also added that there is no regulatory body overseeing its development, which is insane. He had earlier said that while humans will die, AI will be immortal. It will live forever. He calls AI “an ‘immortal dictator’ from which we can never escape”. 

We discuss Musk’s concern here because he was one of the investors in Deepmind. Interestingly, during this interview, Musk expressed his ‘top concern’ with Google’s DeepMind, saying, “Just the nature of the AI that they’re building is one that crushes all humans at all games.”

On November 14, 2014, Elon Musk posted a message on a website called Edge.org. He wrote that at AI research labs like DeepMind, artificial intelligence was improving at an alarming rate: “Unless you have direct exposure to groups like DeepMind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. Ten years at most. This is not a case of crying wolf about something I don’t understand. I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognise the danger but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the internet. That remains to be seen. . . .”

The message was deleted shortly after.

Download our Mobile App

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.

Subscribe to our newsletter

Join our editors every weekday evening as they steer you through the most significant news of the day.
Your newsletter subscriptions are subject to AIM Privacy Policy and Terms and Conditions.

Our Upcoming Events

15th June | Online

Building LLM powered applications using LangChain

17th June | Online

Mastering LangChain: A Hands-on Workshop for Building Generative AI Applications

Jun 23, 2023 | Bangalore

MachineCon 2023 India

26th June | Online

Accelerating inference for every workload with TensorRT

MachineCon 2023 USA

Jul 21, 2023 | New York

Cypher 2023

Oct 11-13, 2023 | Bangalore

3 Ways to Join our Community

Telegram group

Discover special offers, top stories, upcoming events, and more.

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Subscribe to our Daily newsletter

Get our daily awesome stories & videos in your inbox
MOST POPULAR