Alan Turing is undoubtedly one of the most significant figures in computer science and is often credited as the father of the modern field. Beyond his contributions to theoretical computer science, he was also a pioneer of artificial intelligence, helping it develop as a field of research. His influential paper, Computing Machinery and Intelligence, was first published in 1950. At the start, he shows the difficulty of even defining the terms ‘machine’ and ‘think’, which is unsurprising: many have attempted it, only to run into problems. I agree with Turing (and many before him) on this, because I believe no definition can ever wholly capture the essence of ideas like ‘thought’ and ‘consciousness’. So we should not be concerned with a formal definition of these terms, only with their intuition. But then how do we test whether a ‘machine’ can really ‘think’ and is ‘intelligent’? For this, Turing turns to an experimental game.
This experimental game is ‘the imitation game’ (as Turing describes it in the paper), or the ‘Turing test’ (as it is popularly known today). I would have liked to add a brief explanation of the game here, but I doubt I would do it justice, so I leave it to the reader to read it in the original paper, as described by Turing himself (it shouldn’t take more than three minutes).
The idea is that if a machine can pass the Turing test, then we may say the machine can think, or that it is intelligent. Long story short: as of 2021, no machine has ever passed the Turing test. This shows how far we still are from the ‘intelligence’ component of artificial intelligence. There is wide consensus today that the end goal of AI is artificial general intelligence (AGI) or artificial superintelligence (ASI). What we have today is artificial narrow intelligence (ANI), which is essentially mathematical models trying to make sense of data. ANI works extremely well on the narrow task assigned to it, but we have yet to generalize to broader tasks, and that cannot be achieved simply by stacking many ANI agents together; we really need generality. We need logic. This leaves us with an important question:
Can logic even be mathematical? The question might seem absurd, given that logic is rigorously studied in mathematics, but that logic may not translate easily into the kind of logic generality requires.
In the next section of his paper, Turing analyzes why it is worthwhile to investigate the original question of thinking in machines in terms of the imitation game. A simple yet profound objection is also raised:
The game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does?
Turing himself describes this objection as a ‘strong’ one, but I find the counter he gives insufficient. I consider this objection something to ponder further.
Next, we must try to define a ‘machine’, even though any such definition may not be completely adequate. Turing explores this in the third section of his paper. Defining a machine should be easy (we think), but as he explains, there can be issues. It is a very interesting section that rewards a careful read.
Perhaps the most interesting section, however, is the sixth. Here Turing comments on a range of opposing viewpoints to the question of thinking in machines, spanning everything from theology to mathematics. It is an extremely interesting section that forces you to think. One need not agree with Turing’s responses to these objections, but they are still worth considering.
All in all, Turing’s paper is a very interesting and useful read. Today, much of the focus is on building new and better models, using clever techniques on the data, and so on. But we should not forget the end goal. The simple truth is that while current methods work well, they are far from enough. In the end, statistical learning may or may not be the answer to AGI and ASI. To achieve the final goals, we may have to think of radically new methods, which will require an open mind. Our perspectives and our philosophy regarding AI will certainly help shape these new methods or improve the older ones.