
The Ethical Challenges Of AI In Defence

Policymakers and activists alike have raised concerns about the ethical implications of using AI in defence.


The strength of a country's military is often an indicator of its power. The most developed countries are among the heaviest investors in this sector, and a large part of that investment goes into researching and developing modern technologies such as AI for military applications. AI-equipped military systems can process large volumes of data efficiently and offer superior computing and decision-making capabilities.

That said, in defence the implications of every decision have to be weighed very carefully. Artificial intelligence is still a maturing technology, and its practical applications are often brittle. Unsurprisingly, policymakers and activists alike have raised concerns about the ethics of using AI in defence.

Controversy around AI in defence

The chief concern about using AI in defence and weaponry is that it might not perform as desired, leading to catastrophic results. For example, it might miss its target or launch unapproved attacks, leading to conflict.

Most countries test the reliability of their weapon systems before deploying them in the field. But AI weapon systems can be non-deterministic, non-linear, high-dimensional, probabilistic, and continuously learning; traditional testing and validation techniques are insufficient for systems with such characteristics. One practical consequence is that single pass/fail checks give way to statistical acceptance tests, as in the sketch below. Furthermore, the race among the world's superpowers to outpace one another has made people uneasy, as countries may not play by the norms or consider ethics while designing weapon systems, with disastrous implications on the battlefield.
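
As a rough illustration, here is a minimal sketch of a statistical acceptance test for a stochastic component. Everything in it is an assumption for the sake of the example: the system under test is a random stub with a notional 2% failure rate, and the trial count, confidence level, and acceptance threshold are placeholders.

```python
# Minimal sketch: a statistical acceptance test for a stochastic system.
# Unlike a deterministic pass/fail check, we estimate the failure rate
# over many trials and bound it with a confidence interval.
import math
import random

def system_under_test() -> bool:
    # Placeholder for a probabilistic component; returns True on success.
    return random.random() > 0.02  # assumed ~2% failure rate

def acceptance_test(trials: int = 10_000, max_failure_rate: float = 0.05) -> bool:
    failures = sum(not system_under_test() for _ in range(trials))
    p_hat = failures / trials
    # One-sided 95% upper confidence bound (normal approximation).
    upper = p_hat + 1.645 * math.sqrt(p_hat * (1 - p_hat) / trials)
    print(f"observed failure rate {p_hat:.4f}, 95% upper bound {upper:.4f}")
    return upper <= max_failure_rate

if __name__ == "__main__":
    print("PASS" if acceptance_test() else "FAIL")
```

The point is that a single successful run tells you little about a probabilistic system; confidence has to be accumulated over many trials, and even then it holds only for the distribution of conditions you tested.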

Technical Challenges

As defence leans further into technology, it becomes imperative to evaluate the loopholes in AI-based defence systems that bad actors might exploit. For example, adversaries might tamper with training data, or try to infer sensitive training data by analysing a model's responses to specially tailored test inputs.
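
The second risk is often demonstrated as a membership inference attack. The sketch below is a toy illustration rather than a realistic attack: it assumes only that the attacker can query the model's predicted probabilities, and the dataset, model, and confidence threshold are all placeholders. The train/held-out confidence gap is small for a well-regularised model but grows with overfitting.

```python
# Hypothetical sketch of a confidence-thresholding membership inference
# attack: training points tend to receive higher model confidence than
# unseen points, so thresholding confidence "guesses" membership.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def confidence(model, X):
    # Confidence = probability the model assigns to its predicted class.
    return model.predict_proba(X).max(axis=1)

threshold = 0.9  # placeholder; a real attacker would calibrate this
print("flagged as members (training set):",
      (confidence(model, X_train) > threshold).mean())
print("flagged as members (held-out set):",
      (confidence(model, X_out) > threshold).mean())
```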

Furthermore, the black-box nature of many AI models, and the resulting lack of explainability, creates risk when they are applied in highly regulated or safety-critical environments.

An opponent can also subvert the training process itself. For example, instead of the model learning from its designated dataset, it could be trained on deliberately corrupted data so that it gives false results whenever it is used, a technique known as data poisoning. In addition, several other operational risks arise from the reliability, fragility, and security of AI systems.
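
A minimal sketch of the poisoning idea, using label flipping as its simplest form; the dataset, model, and 30% corruption rate here are all illustrative assumptions:

```python
# Hypothetical sketch: label-flipping data poisoning. An attacker who
# can corrupt a fraction of the training labels degrades the deployed
# model even though the training code itself is untouched.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Flip 30% of training labels to simulate a poisoned pipeline.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_te, clean.predict(X_te)))
print("poisoned accuracy:", accuracy_score(y_te, poisoned.predict(X_te)))
```

Even this crude attack typically degrades test accuracy; subtler poisoning can implant targeted misbehaviour that ordinary accuracy checks never surface.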

From a humanitarian standpoint

UN chief António Guterres once said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.

Another issue is strategic risk: the possibility that AI increases the likelihood of war globally and further escalates ongoing conflicts. There can be no definitive answer to this question until states actually reach that stage of AI deployment.

Humanitarians have long advocated against the deployment of such technologies in the field. But despite extensive efforts at the United Nations to ban the technology, a complete ban is unlikely to be enforceable. The best way forward is to define a set of broad guidelines governing its deployment.

To begin with, AI alone should never be allowed to make judgement calls in matters of arms; a human should review its decisions before they are executed in the field. In addition, the personnel entrusted with deploying AI must have a thorough knowledge of the technology.

Furthermore, such systems should be governable: humans should retain sufficient oversight and the ability to disengage a malfunctioning system immediately.
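
These two requirements, human review before execution and an immediate kill switch, map onto a simple control pattern. The sketch below is a toy illustration; every name in it is hypothetical.

```python
# Hypothetical sketch of a human-in-the-loop gate: the model may only
# recommend; a human must approve before any action executes, and an
# operator can disengage the whole system at any time.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float

class HumanInTheLoopController:
    def __init__(self):
        self.engaged = True

    def disengage(self):
        # Immediate kill switch: no further actions execute.
        self.engaged = False

    def execute(self, rec: Recommendation, human_approves) -> bool:
        if not self.engaged:
            print("system disengaged; ignoring recommendation")
            return False
        if not human_approves(rec):
            print(f"operator rejected: {rec.action}")
            return False
        print(f"executing approved action: {rec.action}")
        return True

controller = HumanInTheLoopController()
controller.execute(Recommendation("flag object for review", 0.93),
                   human_approves=lambda r: r.confidence > 0.9)
controller.disengage()
controller.execute(Recommendation("flag object for review", 0.95),
                   human_approves=lambda r: True)
```

The design point is that the model only ever produces recommendations; the authority to act, and to stop acting, stays with the operator.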

“If you want to do strategy planning, then you’re gonna have a mashup of machine learning with, maybe, game theory and a few other elements,” said William Scherlis, director of the Information Innovation Office at the Defense Advanced Research Projects Agency of the United States.
