Is It Hyperbolic To Compare Artificial Intelligence To Nuclear Weapons?


As the field of artificial intelligence undergoes revolutionary changes, several noted names and visionaries have warned the world about its grave consequences. In fact, AI is being likened to something as lethal as nuclear weapons.



Reiterating his position on the dangers of AI, Microsoft co-founder and former CEO Bill Gates compared AI to nuclear weapons in a recent interaction at Stanford University. Stating that the technology is both promising and dangerous, Gates said, “The world hasn’t had that many technologies that are both promising and dangerous. We had nuclear weapons and nuclear energy, and so far so good.”

He went on to add that the focus of AI should be on helping people get medical care, and said that the technology should be leveraged to identify promising drugs and improve the drug development process. “I do not believe without machine learning techniques we would be able to take the dimensionality of this problem to find the solution,” Gates added.

Why Are Big Names In The Industry Distrustful Of AI?

Gates isn’t the only big name among tech entrepreneurs to make such a claim. Back in 2017, Elon Musk, in one of his early Twitter musings, warned the world that it is AI, and not Kim Jong Un, that is more dangerous to the world. Musk has been an open critic of the technology’s potential for harm and has called out Facebook’s many initiatives that explore AI.

His scepticism led to the creation of OpenAI, an organisation formed as a counterbalance to promote ethical AI. However, in February 2018, Musk left the group owing to differences of opinion with fellow members. Almost a year later, he took to Twitter to explain his position: “I had to focus on solving a painfully large number of engineering and manufacturing problems at Tesla (especially) and SpaceX. Also, Tesla was competing for some of the same people as OpenAI and I didn’t agree with some of what OpenAI team wanted to do. Add that all up and it was just better to part ways on good terms,” Musk tweeted.

Interestingly, the technology hasn’t escaped criticism from prominent names in India either, including Infosys co-founder Narayana Murthy and Shashi Tharoor, who have stated that the fears of job losses associated with the technology should be taken very seriously.

Is AI As Lethal As Nuclear Weapons?

As scepticism around the technology grows, experts believe that AI could help de-escalate nuclear tensions, but that it could equally lead to nuclear destruction; the challenge lies in limiting today’s weak AI systems to only certain jobs.

In a paper titled “How Might Artificial Intelligence Affect the Risk of Nuclear War?”, authors Edward Geist and Andrew J Lohn warn that as AI makes its way into weaponry, nuclear warfare becomes increasingly likely. “The effect of AI on nuclear strategy depends as much or more on adversaries’ perceptions of its capabilities as on what it can actually do. For instance, it is extremely technically challenging for a state to develop the ability to locate and target all enemy nuclear-weapon launchers, but such an ability also yields an immense strategic advantage. States, therefore, covet this capability and might pursue it irrespective of technical difficulties and the potential to alarm rivals and increase the likelihood of conflict,” the researchers elaborate.


They further point out that much of the challenge lies with the advent of superintelligence: “With superintelligence, AI would render the world unrecognisable and either save or destroy humanity in the process,” they note.

The researchers also suggest a timeframe within which an AI system could influence or trigger nuclear destruction. Though AI’s role by 2040 will be limited to that of a decision-support system, the researchers say it could still influence humans on matters of escalation. “Without being directly connected to the nuclear launchers, an AI could still provide advice to humans on matters of escalation. It seems reasonable that such a capability, at least for some aspects of the decision-making process, could be achieved by 2040 given the progress AI is making in increasingly complex and poorly specified tasks,” explain the authors.

Echoing the same view, Dr Vincent Boulanin, a senior researcher at SIPRI, points out in a blog post that AI’s penetration into modern weaponry could lead to a nuclear-war-like situation. He notes that the adoption of AI into modern weapons by nuclear-armed states might encourage other states to consider the same, and that further destabilising measures by nuclear powers could increase the chances of a nuclear conflict. “This could include entering into an arms race, doubling down on the modernisation of nuclear arsenals, renouncing a ‘no first use’ policy, increasing alert statuses, or further automating nuclear launch policies,” he writes.

The only way to avoid such a catastrophe is to encourage meaningful dialogue between all the stakeholders, Boulanin points out: “A commitment to lower the alert status of nuclear arsenals, as well as more openness about nuclear modernisation plans and information-sharing via different dialogue tracks are measures that could clearly help to start mitigating the destabilising potential of nuclear-related AI applications.”


