The Ethical Challenges Of AI In Defence

Policymakers and activists alike have raised concerns about the ethical implications of using AI in defence.

A country's military strength is often an indicator of its power. Some of the most developed countries invest most heavily in this sector, and a large part of that investment goes into researching and developing modern technologies such as AI for military applications. AI-equipped military systems can handle large volumes of data efficiently and offer superior computing and decision-making capabilities.

That said, in the case of defence, the implications of every decision have to be weighed very carefully. Artificial intelligence is still a maturing technology, and its practical applications are often brittle. Unsurprisingly, policymakers and activists alike have raised concerns about its use in defence.

Controversy around AI in defence

The chief concern with using AI in defence and weaponry is that it might not perform as intended, leading to catastrophic results. For example, a system might miss its target or launch attacks that were never approved, escalating conflicts.

Most countries test their weapon systems' reliability before deploying them in the field. But AI weapon systems can be non-deterministic, non-linear, high-dimensional, probabilistic, and continuously learning, and traditional testing and validation techniques are insufficient for systems with these properties. Furthermore, the race between the world's superpowers to outpace each other has made people uneasy: countries might not play by the norms or consider ethics while designing weapon systems, with disastrous implications on the battlefield.
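The testing problem can be illustrated with a toy sketch (the policy below is purely hypothetical, not drawn from any real system): a probabilistic decision rule returns different outputs for the same input, so a traditional fixed expected-output test cannot validate it; at best, a tester can check statistical properties.

```python
import random

def stochastic_policy(state, rng):
    # Toy probabilistic decision rule (illustrative only): the same
    # input can yield different actions on different runs.
    return "engage" if rng.random() < 0.7 else "hold"

rng = random.Random(42)
outcomes = [stochastic_policy("identical input", rng) for _ in range(1000)]

# A deterministic check like `assert outcome == "engage"` cannot hold here.
# The best available validation is statistical, e.g. the long-run rate:
engage_rate = outcomes.count("engage") / len(outcomes)
print(engage_rate)  # close to 0.7, but varies from run to run
```

A continuously learning system is even harder to validate, since the distribution itself shifts after deployment.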

Technical Challenges

As defence starts leaning towards technology, it becomes imperative that we evaluate the loopholes of AI-based defence technologies that bad actors might exploit. For example, adversaries might seek to compromise AI systems by tampering with training data, or to infer sensitive training data by analysing the model's responses to specifically tailored test inputs.

Furthermore, the black-box nature of many AI models, and the resulting lack of explainability, makes them risky to deploy in highly regulated or safety-critical environments.

An opponent can also craft attacks that exploit the machine learning training process itself. For example, if a model is trained on poisoned or deliberately mislabelled data instead of a vetted dataset, it can produce false results every time it is used. Several other operational risks arise from the reliability, fragility, and security of AI systems.
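A minimal sketch of such a data-poisoning attack, using synthetic data and a deliberately simple stand-in classifier (nearest centroid; the setup is illustrative, not a real weapon-system pipeline): if an attacker inverts the training labels, the trained model systematically gives the wrong answer at deployment time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class means separated along the first feature.
n, d = 4000, 5
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 0] += np.where(y == 1, 1.5, -1.5)

X_tr, y_tr = X[:2000], y[:2000]
X_te, y_te = X[2000:], y[2000:]

def fit_centroids(X, y):
    # A minimal nearest-centroid classifier stands in for the model.
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(c0, c1, X):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

# Model trained on clean data performs well.
acc_clean = (predict(*fit_centroids(X_tr, y_tr), X_te) == y_te).mean()

# Poisoned pipeline: the attacker inverts the training labels, so the
# model learns the classes backwards and fails at deployment time.
acc_poisoned = (predict(*fit_centroids(X_tr, 1 - y_tr), X_te) == y_te).mean()

print(acc_clean, acc_poisoned)  # clean accuracy is high, poisoned is far below chance
```

Real poisoning attacks are usually subtler, corrupting only a small fraction of the data to evade detection, but the failure mode is the same: the model faithfully learns whatever the training set says.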

From a humanitarian standpoint

UN chief António Guterres once said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.

Another issue is strategic risk: the possibility that AI increases the likelihood of war globally and further escalates ongoing conflicts. Whether it will remains an open question until states actually field such capabilities.

Humanitarians have always advocated against deploying such technologies in the field. Despite extensive efforts at the United Nations to ban the technology, a complete ban is unlikely to be enforceable. The best way forward is to define a set of broad guidelines for its deployment that keep the world secure.

To begin with, AI alone should never be allowed to make judgement calls in matters of arms; a human should review its decisions before they are executed in the field. In addition, the people entrusted with deploying AI must have a thorough understanding of the technology.

Furthermore, such systems should be governable: humans should retain sufficient oversight and the ability to disengage a malfunctioning system immediately.
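These two guidelines can be sketched as a simple control pattern (the names and structure below are hypothetical, for illustration only): the AI proposes, a human must explicitly approve before anything executes, and a disengage switch overrides everything.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    # A hypothetical AI output: a proposed action plus model confidence.
    action: str
    confidence: float

def execute(rec, human_approved, system_enabled=True):
    # Governability first: a disengaged system does nothing at all.
    if not system_enabled:
        return "system disengaged"
    # Human-in-the-loop: no action runs without explicit approval.
    if not human_approved:
        return "awaiting human review"
    return f"executing: {rec.action}"

rec = Recommendation(action="flag target for review", confidence=0.92)
print(execute(rec, human_approved=False))                        # awaiting human review
print(execute(rec, human_approved=True))                         # executing: flag target for review
print(execute(rec, human_approved=True, system_enabled=False))   # system disengaged
```

The key design choice is that approval and disengagement live outside the model: no level of model confidence can bypass either check.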

“If you want to do strategy planning, then you’re gonna have a mashup of machine learning with, maybe, game theory and a few other elements,” said William Scherlis, director of the Information Innovation Office at the Defense Advanced Research Projects Agency of the United States.

Meenal Sharma
