How The Tech Community Is Leading The War Against Development Of Lethal Autonomous Weapons

Of late, there have been many arguments against killer robots and lethal autonomous weapons, but none has been more potent than the recent news of more than 2,400 technology leaders calling for a ban on the development of lethal autonomous weapons. One of the biggest voices in this debate has been Tesla’s Elon Musk, who has long rallied against the use of killer robots. At the 2018 International Joint Conference on Artificial Intelligence in Stockholm, Sweden, more than 2,400 individuals and 150 companies from 90 different countries signed a pledge on Wednesday vowing to play no part in the construction, trade, or use of autonomous weapons.

Max Tegmark, president of the Future of Life Institute and one of the supporters of the ban on the development of LAWS, said: “AI has huge potential to help the world — if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way”. According to a statement released by the body, the decision to take a human life should never be delegated to a machine, since LAWS that engage targets without human intervention would be dangerously destabilising for every country and individual.

If autonomous weapons are the Kalashnikovs of tomorrow — the tech community should abandon them

AI military tech has raised concerns in the past few months: Google employees reportedly threatened to quit over a military drone project, and employees at Microsoft posted a letter to their CEO over the use of the company’s technology by law enforcement agencies. Now, as AI researchers and communities across the globe vow not to invest in the development of LAWS, the debate clearly puts the onus on the tech community to counter the proliferation and threat posed by autonomous weapons. Leading organisations such as the XPRIZE Foundation, Element AI, the Swedish AI Society, the European Association for Artificial Intelligence and GoodAI, among others, signed the pledge against the military use of AI. Besides, the tech community has realised the urgent opportunity and necessity to join a debate that has so far involved only policymakers and government leaders.

It’s not just the ethical questions around LAWS that have raised eyebrows; there are deeper concerns about autonomous weapons being hacked, in addition to their ending up in the hands of terrorists. As part of the pledge, tech companies are vowing not to participate in or support the development, manufacture and trade of lethal autonomous weapons. What has sparked more concern is that the international community, including policymakers, government leaders and the tech community, has no governance framework or technical tools to prevent this from spiralling into an arms race.

Understanding the three levels of autonomy in LAWS

Weapons with human-in-the-loop systems: Most LAWS deployed today include a human-in-the-loop element, which means a human command is required to select a target and deploy force. Israel’s Iron Dome system is an example of this level of autonomy.

Weapons with human-on-the-loop systems: This gives the weapon partial autonomy, wherein the robot can supersede a human’s decision in the selection of a target. A case in point is South Korea, which has deployed a sentry robot along the demilitarised zone bordering North Korea. According to a report from Human Rights Watch, the United States, Russia, China, Israel, South Korea and the UK possess partially autonomous weapons systems such as armed drones.

Fully autonomous weapons: Fully autonomous weapons operate without requiring any human input. For example, an IEEE report notes that while drones are by and large remotely controlled, hobbyist drones are becoming increasingly autonomous. Some newer models can navigate to a fixed target on their own and even track moving objects. Increasingly, there are reports of small drones being equipped with facial recognition technology that can be used to search for people autonomously.
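The three levels above differ in exactly one dimension: where the human sits relative to the engagement decision. As an illustrative sketch only (the names and structure are hypothetical, not drawn from any real weapons system), the taxonomy can be modelled as a simple decision rule:

```python
from enum import Enum


class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1   # a human command is required to engage (e.g. Iron Dome)
    HUMAN_ON_THE_LOOP = 2   # the system acts on its own, but a human can veto
    FULLY_AUTONOMOUS = 3    # the system operates without any human input


def engagement_requires_human(level: AutonomyLevel) -> bool:
    """True if no force can be used without an explicit human command."""
    return level is AutonomyLevel.HUMAN_IN_THE_LOOP


def human_can_intervene(level: AutonomyLevel) -> bool:
    """True if a human retains any ability to override the system."""
    return level is not AutonomyLevel.FULLY_AUTONOMOUS
```

The pledge described in this article is, in these terms, a commitment never to build systems for which `human_can_intervene` returns `False`.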

Here are some of the reasons for concerns:

  • Tech companies and governments are manufacturing micro-drones which can be used for spying and surveillance
  • As of now, there are no effective guidelines or policy frameworks for defence against drones
  • There is no defence framework to keep military-grade weapons from falling into the hands of terrorists or being misused

Rise of counter-force weapons

In his book Army of None: Autonomous Weapons and the Future of War, Paul Scharre, a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, writes that governments across the world, including Russia, the US and China, are building counter-force weapons to fight other militaries. This raises even more concern, since counter-force autonomous weapons pose serious questions of risk and controllability.

Richa Bhatia
Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage, and has had stints at the Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old, and loves writing about the next-gen technology that is shaping our world.