In a recent training flight, the US Air Force used artificial intelligence aboard a Lockheed U-2 spy plane to control its sensors and navigation systems. This is believed to be the first time artificial intelligence has been used on a US military aircraft. Although the plane was flown by a human pilot and no weapons were involved, experts in the defence sector regard this as a watershed moment, as well as a subject of intense debate in arms control communities.
Asked about the flight, Assistant Air Force Secretary Will Roper stated that leveraging AI judiciously in the US military gives rise to a “new age of human-machine teaming and algorithmic competition.” He believes that failing to realise the full potential of artificial intelligence will lead to “ceding decision advantage to our adversaries.”
According to sources, the AI system used on the spy plane was deliberately designed without a manual override in order to test its capability. However, it was restricted to highly specific tasks and was kept separate from the plane’s flight controls. Because of the sensitive nature of the work, the Air Force withheld the pilot’s name, sharing only his call sign — ‘Vudu.’
The pilot who carried out the test told the media that he was technically the pilot in command and that the role of the artificial intelligence was relatively narrow. However, “for the task the AI was designed, it performed well,” he stated.
The AI algorithm itself, dubbed ARTUµ in an apparent Star Wars reference, was responsible for sensor employment and tactical navigation, according to a news release from the Air Force.
Further, it was explained that for this initial test flight, the AI system was trained to look for incoming missiles and missile launchers. While directing the plane’s sensors, the AI was designed to have the final call.
This initiative aimed to move the Air Force, and the military more broadly, towards human-machine teaming in which machines handle technical tasks under direct human control. Roper stated that humans will ultimately remain in control of life-or-death decisions such as flight control and targeting.
ARTUµ was developed from an open-source algorithm — µZero, created by the AI research company DeepMind for strategy games such as chess and Go — which was adapted by the U-2 Federal Laboratory. ARTUµ was then deployed using Kubernetes, the Google-developed open-source container platform, which enabled the AI system to work with the plane’s onboard computer systems.
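The Air Force has not published ARTUµ’s internals, but the µZero family of algorithms plans by running Monte Carlo tree search over a learned model of the environment. The minimal sketch below illustrates that core search loop in simplified form: all names are illustrative, and the hand-supplied `step_fn` stands in for the learned dynamics model that a real MuZero-style system would use.

```python
import math
import random

class Node:
    """A search-tree node accumulating visit counts and value estimates."""
    def __init__(self):
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}  # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def ucb_score(parent, child, c=1.4):
    """Upper-confidence bound: trades off exploitation vs. exploration."""
    if child.visits == 0:
        return float("inf")
    return child.value() + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(root_state, step_fn, actions, n_sims=300, horizon=5, seed=0):
    """Pick one action by simulating rollouts through a model.

    step_fn(state, action) -> (next_state, reward) is a stand-in for the
    learned dynamics model in a MuZero-style planner.
    """
    rng = random.Random(seed)
    root = Node()
    for _ in range(n_sims):
        node, state, path, total, depth = root, root_state, [root], 0.0, 0
        # Selection: descend through expanded nodes by UCB score.
        while node.children and depth < horizon:
            action = max(actions, key=lambda a: ucb_score(node, node.children[a]))
            state, r = step_fn(state, action)
            total += r
            node = node.children[action]
            path.append(node)
            depth += 1
        # Expansion: give the reached leaf one child per action.
        if depth < horizon and not node.children:
            node.children = {a: Node() for a in actions}
        # Rollout: random playout to the planning horizon.
        while depth < horizon:
            state, r = step_fn(state, rng.choice(actions))
            total += r
            depth += 1
        # Backup: credit every node on the path with the rollout return.
        for n in path:
            n.visits += 1
            n.value_sum += total
    # Act greedily on visit counts at the root.
    return max(actions, key=lambda a: root.children[a].visits)
```

In a toy setting where one action always yields reward and the other never does, the search concentrates its visits on the rewarding action; real systems replace the random rollout and hand-written `step_fn` with learned value and dynamics networks.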
The Lockheed U-2 spy plane was never designed for AI-enabled flight; it was built in the early 1950s for the CIA to conduct Cold War surveillance from staggeringly high altitudes of 60,000 to 70,000 feet. The planes were later procured by the Defense Department.
Given the plane’s surveillance role, it is well suited to AI that can analyse complex data. An earlier programme, the Pentagon’s Project Maven, was created to rapidly analyse drone footage, but after backlash from its employees, Google declined to renew its involvement. The tech giant later released a set of AI principles that bar its algorithms from being used in any weapons system.
That said, Eric Schmidt, who led Google until 2011, believes it will be tough for the US military to fully embrace autonomous weapons anytime soon, because of the uncertainty over how AI will perform across all possible scenarios, “including those in which human life is at stake.” According to him, while humans killing civilians by mistake is a tragedy, AI doing the same is a disaster.
After all, nobody wants to take responsibility for such an uncertain system. For now, this initiative simply demonstrates the possibility of the military working with AI.