A group of researchers has come to the unsettling conclusion that containing a superintelligent AI may not be possible, arguing that controlling such an AI would fall beyond human comprehension.
In a paper titled ‘Superintelligence Cannot be Contained: Lessons from Computability Theory’, published in the Journal of Artificial Intelligence Research, the researchers argue that total containment is, in principle, impossible due to fundamental limits inherent to computing. The paper further claims that it is mathematically impossible for humans to calculate an AI’s plans, thereby making it uncontainable.
The authors argue that implementing a rule for artificial intelligence to “cause no harm to humans” would not be an option if humans cannot predict the scenarios that an AI may come up with. They believe that once a computer system is working at an independent level, humans can no longer set limits. The team’s reasoning was inspired in part by Alan Turing‘s formulation of the halting problem in 1936. The problem centres on knowing whether a computer program will reach a conclusion or an answer, that is, whether it will halt or simply loop forever trying to find one. Turing proved that no general procedure can decide this for every possible program.
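The limit Turing identified can be illustrated with a small sketch (my own illustration, not from the paper): in practice, an observer can only simulate a program for a bounded number of steps, and exhausting that budget proves nothing about whether the program would eventually halt.

```python
# Toy illustration of the halting problem's practical consequence
# (illustrative only, not from the paper): a bounded simulation can
# confirm that a program halted, but can never prove that it won't.

def halts_within(program, arg, max_steps):
    """Bounded halting check: run `program(arg)` (a step generator)
    for at most `max_steps` steps.
    True  -> the program definitely halted within the budget.
    False -> budget exhausted; the program MIGHT still halt later."""
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # generator finished: the program halted
    return False          # inconclusive: no finite budget settles all cases

def countdown(n):
    """Halts after n steps."""
    while n > 0:
        n -= 1
        yield

def loop_forever(_):
    """Never halts."""
    while True:
        yield

print(halts_within(countdown, 10, 1000))    # True: it halted
print(halts_within(loop_forever, 0, 1000))  # False: inconclusive
```

The asymmetry between the two answers is the point: a “containment algorithm” that must predict every behaviour of a more capable program runs into exactly this barrier, since a `False` can never be distinguished from “not simulated long enough”.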
An excerpt of the paper reads, “This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”
Computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany said, “In effect, this makes the containment algorithm unusable”. In other words, machines already perform certain important tasks independently, without their programmers fully understanding how they learned them.
The researchers have, however, suggested alternatives, such as teaching AI some ethics. Limiting the potential of a superintelligence could prevent it from annihilating the world, even if it remains unpredictable.