Between A & I: Theoretical Questions About Strong Vs Weak As AI Becomes Mainstream
“The future has already arrived. It’s just not evenly distributed yet.”

William Gibson

As artificial intelligence enters the world, taking over many of the roles we had seen as the uncontested domain of flesh-and-blood humans, we are left with profound questions about the world and our place in it. We are entering new territory blind; the outcomes and consequences of our actions are difficult to gauge at this time. Artificial intelligence technologies are asking the human species as a whole to re-evaluate what it means to be both intelligent and human. At the root of this AI-related dilemma lies the paradox and controversy surrounding the differing natures and goals of Strong and Weak AI. Some would suggest that these two domains of inquiry are entirely different, with completely dissimilar outcomes for human society.



Let us first look at the differences between the two divergent approaches to artificial intelligence, and from there proceed to the challenges each poses for the field as a whole. The generally accepted definition of Strong-AI is that its agents will be artificial persons: computational, non-biological machines that have all the mental powers an average human has, including ‘phenomenal consciousness’. Weak-AI, by contrast, is only the appearance of human intelligence: a process that addresses questions so as to arrive at correct answers to problems, that is, the answers an informed human being might arrive at. Yet if we look at the gold standard of artificial intelligence tests, the Turing Test, posited in 1950, it is in a way designed to measure only for Strong-AI, on a formal level, using “observable outcomes”. At present, however, most complex AI systems, whether powered by logic-based or non-logic-based processes, are firmly in the domain of Weak-AI. We are thus faced with a paradox: we can test only for Strong-AI, but have been able to develop only Weak-AI.

Turing & Searle’s Room: Strong AI Or Weak AI?

We begin this journey into the paradox of Strong-AI in front of the two closed doors of Alan Turing’s Imitation Game. In Turing’s famous game, which he suggested as a test for functional artificial intelligence, an observer attempts to ascertain the nature of two intelligent entities behind closed doors, communicating with them by asking written questions in a natural language rather than computer code. The same questions are answered by both a human participant and an AI agent, and the fundamental question for the observer is which of the entities is a computer agent and which a human mind. Turing gave some parameters for this experiment: the conversation was to last five minutes, and the interrogation was to proceed along general lines of inquiry. He predicted that by the end of the 20th century there would exist complex computers and computer programs able to deceive a reasonable human observer more than 30% of the time, over a course of observations. Turing’s prediction has not been fully realized, but the ubiquity of chatbots and autonomous answering systems has brought us much closer to the humanlike communication Turing hoped for in the 1950s.
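To make the protocol concrete, here is a minimal sketch of the game’s structure in Python. It is an illustration only: the two reply functions are hypothetical placeholders for the hidden participants, and the interrogator is modelled as a function handed the written transcript.

```python
import random

# A minimal sketch of the Imitation Game, under the simplifying
# assumption that each hidden participant can be modelled as a
# function from a written question to a written answer.
# Both replies below are placeholders, not real systems.

def human_reply(question: str) -> str:
    return "Let me think about that."

def machine_reply(question: str) -> str:
    return "Let me think about that."

def imitation_game(questions, interrogator_guess) -> bool:
    """One session: the interrogator sees only the written answers
    from doors 'A' and 'B' and must name the door hiding the human."""
    # Randomly seat the two participants behind the doors.
    a, b = random.sample([human_reply, machine_reply], 2)
    transcript = [(q, a(q), b(q)) for q in questions]
    guess = interrogator_guess(transcript)          # returns "A" or "B"
    return {"A": a, "B": b}[guess] is human_reply   # correct identification?
```

Turing’s 30% figure then says that, over many such sessions, the machine should leave the interrogator wrong at least three times in ten.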

But is the mere appearance of human communication the substance of intelligence itself? This brings us to the core of the philosophical, and even technical, problem concerning the nature of the AI technology being developed today. For that we first have to arrive at a working definition of General-AI, which is difficult considering that AI, by its nature of being both “Artificial” and “Intelligent”, works against a border that is constantly being pushed back.

For whatever humanlike function can be mathematized and turned into techniques and processes immediately ceases to be AI and becomes a normal machine function. According to the Stanford Encyclopedia of Philosophy, AI is the ability of machines to replicate faculties considered solely the domain of human intelligence, that is, functions which were thought to be fulfilled only by a human mind. This is where the controversy over the difference between Strong and Weak AI emerges: it is centred in the very definition of Artificial Intelligence.

The physicist and philosopher Roger Penrose writes, “[T]he objective of AI is to imitate by means of machines, normally electronic ones, as much of human mental activity as possible.” Penrose further divides the field into two broad domains: robotics and expert systems. Robotics is the application of machine learning to the physical processes of the human world, such as navigation, manufacture and transportation, whereas expert systems are the mathematical codification and application of the entire body of knowledge of expert fields like medicine and law. Both these applications are firmly within the domain of Weak-AI, bringing us to the central dilemma of the field: the application of the technology has so far produced instances only of Weak-AI, even when the stated goal is Strong-AI. The fault lies in our philosophical understanding of human intelligence itself, which makes Strong-AI, for now, an impossibility. Computer scientists Stuart Russell and Peter Norvig have summarized the central premises of the field of AI: one is to design ideally rational agents, and the other is to create agents who resemble humans. For in the end, it might not be enough to be formally rational in order to resemble humans. Russell and Norvig have grouped the four possible goals of the AI field under these two premises, as the table below shows.


                   Human-Based                        Ideal-Rational
Reasoning Based    Systems that think like humans.    Systems that think rationally.
Behaviour Based    Systems that act like humans.      Systems that act rationally.

It is clear, then, that it is not enough to develop Strong-AI solely by being rational. There has, for example, been opposition to the Turing Test on the grounds that, being a procedural test, it can be passed by deceiving the human interrogator through a complex system of algorithmic decision-making without any true act of comprehension. John Searle’s Chinese Room Argument is a notable example of such opposition. It imagines a scenario in which, instead of an AI computer, a human participant converses in writing, from behind closed doors, with a human interrogator in the Chinese language; the interrogator is a Chinese speaker, but the participant is not. Yet the participant manages to convince the Chinese speaker, by following complex instructions written in English, that she too can converse in Chinese. Can the participant then be said to “know” or “comprehend” Chinese? This gap between the result and the understanding of the result is what is holding up Strong-AI for now.
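The point of the argument can be made concrete in a few lines of Python. The sketch below is a toy stand-in for Searle’s rule book: the operator matches incoming symbols against rules and copies out the prescribed reply, and nothing in the process attaches meaning to either side. The rule entries are invented for illustration.

```python
# A toy stand-in for Searle's rule book: pure symbol matching.
# The entries are invented examples; the lookup attaches no meaning
# to the symbols it shuffles.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thank you."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rule book prescribes. The function
    'converses' in Chinese without comprehending a single character."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```

To the interrogator outside, the replies are fluent Chinese; inside, there is only lookup.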

Weak-AI is the human equivalent of memorizing multiplication tables: you can give a pre-given “correct answer” to a problem without investing cognitive effort. But did you perform the multiplication in your mind when you reached the result? Probably not, because the process was procedural; the correct answer to 5 × 7 simply had to be 35. Strong-AI, by contrast, is the ability to perform the multiplication itself, and from the tables of 7 to derive the corresponding results for the tables of 21. This is capacity as opposed to procedure; for now we are caught up in procedural complexities and have not proceeded to recreate the volatile substrate of human intelligence. Results from research in neural networks are promising in this regard, as they are developing independent models for transferable problems by mimicking the overlapping neural patterns of the brain.
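The procedure-versus-capacity distinction can be rendered in a few lines of Python. This is only an analogy, not a claim about how AI systems are implemented: the lookup table stands for memorized procedure, the function for transferable capacity.

```python
# Procedure: a memorized table of pre-given answers for the 7 times table.
TIMES_TABLE_7 = {(7, n): 7 * n for n in range(1, 11)}

def recall(a: int, b: int) -> int:
    """Recall a stored answer; raises KeyError outside the memorized table."""
    return TIMES_TABLE_7[(a, b)]

# Capacity: performing the operation itself, so the same ability
# carries over to the tables of 21 without any new memorization.
def multiply(a: int, b: int) -> int:
    return a * b

print(recall(7, 5))     # 35 -- works only because (7, 5) was stored
print(multiply(21, 4))  # 84 -- no table of 21 was ever memorized
```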

But since we began this section with Turing, let us first ask whether the Imitation Game is itself a good test for Strong-AI. Turing’s founding contribution to the field of AI was to establish the criteria for Artificial Intelligence through a functionalist approach to intelligence: human intelligence is what human intelligence does. Turing does not concern himself with the phenomenological questions surrounding intelligence; he does not ask how, or what, it ‘feels’ like to be intelligent. So from its very first moments, the field of AI research has been caught in an attempt to build observable, functional copies of processes that recreate the results of human intelligence. This functionalist approach leaves out questions about the nature of understanding, comprehension or cognition entirely; AI researchers remain miles away from the dark interiors of any intelligent mind. The fundamental question, then, is this: if there is no ‘inside’ to AI technologies, can there ever be a Strong-AI?


AI: True Mind Or Mind-Metaphor?

In the very chapter in which Penrose gives his definition of AI, he raises a fundamental question about the difference between mechanical imitation and cognitive ‘understanding’. Will the AI behind the doors of the Imitation Game ever be a True Mind, with all the attendant burdens and joys of understanding? Or will it be just a False Mind, a mechanical automaton giving the right output to an input, much like a typewriter? This question is crucial to our understanding of AI’s effect on the world. For however fast and reliable a typewriter is, it can never be granted legal personhood; after all, typewriters do not have souls.

Will the artificial beings we manufacture have agency and consciousness? Will they have that entanglement of intelligence, self and emotion which the well-known computer scientist and philosopher Douglas Hofstadter calls a soul? These are difficult questions, but there may be some inkling of the answers if we attempt to parse through them.

Douglas Hofstadter’s notable contribution to our inquiry into the nature of Strong-AI is that he envisions intelligence as a byproduct of consciousness, a quality that emerges out of the recursive processes that give birth to consciousness. Interestingly, all intelligent beings are conscious, but not all conscious beings are intelligent. Hofstadter captures this in the observation that ‘souls come in different sizes’, and since for him consciousness is tied up with self-awareness, he even envisions a scale of soulness, measured in a unit he calls hunekers. A fly, for example, has only a glimmer of self-awareness: it is barely conscious and correspondingly barely intelligent. A blue whale is far more conscious, able to feel pain, remorse, grief and the beauty of a well-sung whale song. From this position, Hofstadter posits a soul scale ranging from creatures who exhibit barely conscious behaviour to creatures with a plethora of complex behaviours. Seen through Hofstadter’s lens, intelligence appears as the spear point of life’s design to develop a self out of the strange loop of matter.

The primary hurdle on the path to Strong-AI, then, is that present Weak-AI systems, based on algorithmic 0-or-1 binaries, focus only on the results of human intelligence and remain far from the questions of self and life. As of now, we are years away from systems that can display a desire for self-preservation and self-expression, one of the fundamental marks of human intelligence. This gap is the central obstacle, for it is not enough to design ideally logical AI agents; it is necessary to make them resemble humans, with their general-purpose intelligence, and that path to Strong-AI takes the road less travelled, through emotions and understanding.

