An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.
Elon Musk and Google DeepMind co-founder Mustafa Suleyman were among the signatories of the letter, which was made public on Monday.
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” the letter said.
A key organiser of the letter, Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales in Sydney, released it at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, the world’s pre-eminent gathering of top experts in artificial intelligence (AI) and robotics. Walsh is a member of the IJCAI 2017’s conference committee.
The experts call autonomous weapons "morally wrong" and hope to add killer robots to the UN's list of banned weapons, which already includes chemical weapons and intentionally blinding laser weapons.
In December 2016, 123 member nations of the UN's Review Conference of the Convention on Certain Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.
Musk has been very vocal about the inherent risks of artificial intelligence.
In a July 15 speech at the National Governors Association Summer Meeting in Rhode Island, Musk said the government needs to regulate artificial intelligence proactively before there is no turning back, describing it as the "biggest risk we face as a civilization."
“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he had said. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”
The open letter, signed by representatives of companies collectively worth billions of dollars across 26 countries, could add further pressure on the UN to enact a prohibition.