‘Technology’ is defined as ‘the use of scientific knowledge for practical purposes’ (Oxford Dictionary). The term adapts to the place and time in which it exists: the inhabitants of Olduvai Gorge used stone scraping tools to butcher animals for sustenance 2.5 million years ago, and the plough was invented for agriculture around 4000 BC. With the foundation of telecommunication services and Charles Babbage’s first mechanical computer, the ‘Difference Engine’ of 1822, a substratum was laid for rapid advancements in digital and analog apparatus.
Artificial Intelligence (AI) did not enter the techno-scene until the 1940s, when the world’s first programmable digital computers were invented and abstract mathematical reasoning was used to perform computational tasks. It has been defined as ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages’ (Russell and Norvig, 2003). In other words, AI is the simulation of human intelligence by machines through the processes of learning, reasoning and self-correction. We interact with AI on an everyday basis: through video games, smartphone assistants such as Siri or Cortana, purchase prediction by online retailers such as Amazon and Flipkart, customer-support chatbots, and more. The fascinating world of AI permeates our daily lives and has made significant strides over the last two decades.
AI is closely linked to human psychology: the basis for research in the field stems from the network of neurons in the human brain. These neurons form connections in response to perception and outside stimuli, transmitting information through electrical and chemical signals and thereby allowing adaptive learning to take place. AI’s primary objective is to emulate these neurological functions, mimicking the way rational human cognition takes place, and its complexity and scope are expanding rapidly as a result. From ‘context awareness’ mechanisms, which bring an ability to think and create like humans, to ‘natural language processing’, which gives AI a platform and method to communicate, every step seems to be taken in the direction of making it wholly analogous to human functioning and thinking.
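The adaptive learning described above can be made concrete with a minimal sketch of a single artificial neuron (a perceptron): its connection weights strengthen or weaken in response to examples, loosely mirroring how biological connections adapt to stimuli. The AND-gate task, learning rate, and variable names here are illustrative choices, not drawn from any particular system.

```python
# A single artificial neuron learning the logical AND function.
# Weights play the role of connection strengths; repeated exposure
# to examples adjusts them, a crude form of adaptive learning.

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # logical AND of the two inputs

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias term
lr = 0.1        # learning rate (assumed value for the sketch)

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds the threshold.
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Repeatedly nudge each weight toward reducing the prediction error.
for _ in range(20):
    for x, t in zip(inputs, targets):
        error = t - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x in inputs])  # → [0, 0, 0, 1]
```

Real neural networks stack many such units in layers and use smoother update rules, but the core idea is the same: learning is stored as gradual changes in connection weights rather than as explicit program logic.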
A precursor to answering the question of whether machines can one day independently think for themselves must be to understand the connotations of ‘thinking’, and of intelligence itself. Advancements in artificial intelligence aim to emulate human intelligence and, eventually, to create machines that can ‘think’ for themselves and carry out functional tasks analogous to those of humans. ‘Thinking’ involves the conscious cognitive processes of the human mind, such as processing information, problem-solving, decision-making and reasoning. It is the longstanding goal of AI to replicate the processes of thought and, eventually, thought itself. Thinking allows humans to interpret the world around them and to make analyses and predictions based on that understanding, and AI aims to do exactly that. Alan Turing’s ‘Turing Test’ assesses a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In his paper ‘Computing Machinery and Intelligence’, he replaces the question ‘Can machines think?’ with ‘Are there imaginable digital computers which would do well in the imitation game?’ Given the difficulty of defining ‘thinking’ concretely, the latter question, Turing believes, is one that can truly be answered, while the former may be considered ‘too meaningless to deserve discussion’ (Turing, 1950).
While human intelligence can be plainly defined as the ‘capacity to acquire and apply knowledge’, intelligence in AI rests on algorithms that can, in principle, revise their own programming, giving a machine the ability to decode and interpret data and make predictions from it. In this sense, AI may indeed be termed ‘intelligent’. Thinking in AI, however, implies the further ability of a machine to write and code its own programs and to interact with humans in a complex fashion. ‘Artificial General Intelligence’ (AGI) refers to the ability of an AI system not only to work within the context of the specific task it was trained for, but to go beyond it by adapting to a variety of situations and reprogramming itself accordingly; AGI is now a primary long-term objective for many AI developers.
One can say that AI mechanisms move closer every day to imitating the human brain’s neuronal networks, psychology and thinking by mimicking human cognitive processes. This imitation, however, remains devoid of the consciousness and emotional capacity that are part and parcel of the human psyche. Hence we may conclude that machines can think only if the definition of the term is stretched to suit the capabilities of today’s technology. There remains, however, scope for AI to make further strides through advances in technology; the future is always uncertain.