Elon Musk Urges UN For An Outright Ban On Killer Robots, 116 Founders Join In

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.

Elon Musk and Google DeepMind co-founder Mustafa Suleyman were among the signatories of the letter, which was made public on Monday.

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” the letter said.

A key organiser of the letter, Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales in Sydney, released it at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, the world’s pre-eminent gathering of top experts in artificial intelligence (AI) and robotics. Walsh is a member of the IJCAI 2017’s conference committee.

The experts call autonomous weapons “morally wrong” and hope to add killer robots to the UN’s list of banned weapons, which includes chemical weapons and intentionally blinding laser weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

Musk has been very vocal about the inherent risks of artificial intelligence.

In a July 15 speech at the National Governors Association Summer Meeting in Rhode Island, Musk said the government needs to proactively regulate artificial intelligence before there is no turning back, describing it as the “biggest risk we face as a civilization.”

“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he had said. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

The open letter, signed by representatives of companies across 26 countries collectively worth billions of dollars, could add further pressure on the UN to enact a prohibition.

Priya Singh
