In the summer of 1956, soon after Alan Turing had made a preliminary attempt to discover how machines could be made to learn, a small group of scientists gathered at Dartmouth College in New Hampshire to turn their wildly optimistic theories into material reality.
Their proposal reflected a confident attempt to find mechanisms by which machines could use language, form concepts and solve problems ordinarily reserved for human beings. A fair assessment would suggest that, given their circumstances, they were quite successful.
Some of their notable discoveries involved machines that could solve logic puzzles and perform calculations; one program engaged in proving theorems from Bertrand Russell's Principia Mathematica, while the famous ELIZA was able to converse in a patterned natural-language format.
However, the consensus among the scientists at the close of the conference was that the main problem was not the processing capacity of machines; it was their own competence that limited their ability to take full advantage of the machines at their disposal.
To put this assertion in purely layman's terms: an iPhone 6, released about half a decade ago, can perform calculations roughly 100,000 times faster than the IBM 7030, a multi-million-dollar supercomputer of that early era.
This may trigger what the computer geeks of Silicon Valley call a "holy cow" moment: the feeling one gets upon realising that AIs are progressing a lot faster than expected. It is a phenomenon in which we predict progress largely on the basis of past performance as an indicator. I intend to revisit this concept, surprisingly, as this article evolves.
The same could be said of representatives of the National Institution for Transforming India (NITI Aayog), who, on the occasion of releasing a draft document on the governance of artificial narrow intelligent agents, went to the extent of claiming that AIs can never be conscious. This article is, accordingly, a sceptical enquiry into that statement.
A notable coincidence is that the statement was made on the very day the Department of Telecommunications was commemorating 25 years since the first ever "mobile to mobile" call in India. Who would have thought, two decades ago, that this device would play such a pivotal role in shaping our socio-cultural identities?
The term consciousness, as much as we love to explore its possibilities on a Sunday evening at a cocktail party with our colleagues, remains hard to explain in scientific terms. Promising developments have come from the schools of philosophy, psychology and spirituality, but they must be read independently of each other.
For the sake of this article, and with due acceptance of my limited understanding, consciousness in human beings shall refer to the ability to make decisions based upon combinations of past experiences, with the ultimate objective of attaining individual or collective benefits for existence.
Therefore, if we presume the statement made by the representative of NITI Aayog to be valid, we must subsequently be convinced of two inherent premises. Firstly, that consciousness can never, at any point in the future, be logically processed as a computational program into artificial intelligent agents. Secondly, that consciousness is a desirable and important feature that artificial intelligent agents must possess in order to improve their effectiveness substantially.
To counter the first premise, I shall keep my promise to return to the "holy cow" moment. It is only logical to infer that their firm prediction about consciousness and artificial intelligence rests on the frivolous notion of bias. Our greatest exposure to AI agents today is in the form of narrow intelligence, and I believe this is the main indicator they relied upon while making such a heavily controversial statement.
The statement also discards the prevailing sentiment of the rationalist school of thought, whose adherents go so far as to say that a superintelligent artificial agent is the last invention human civilisation will ever need to make. It would be tough to convince a follower of that school that a superintelligent agent cannot produce a computational form of consciousness. In fact, the follower might well convince you that such an agent would be in a far better position to offer a constructive explanation of consciousness.
"To act in a changing and uncertain world, it may be necessary to build machines with very open goals, and an ability to reflect on how to achieve and even adapt those goals. These may be steps on the road to a conscious machine," writes Toby Walsh in 2062: The World That AI Made.
Despite its complex and unexplained structure, consciousness thus carries its own consolidated set of advantages. We have been its evident beneficiaries, and it is unjustified to say that no attempts will be made to produce a computational form of it.
Moving on to the second premise, we must determine whether consciousness is substantially desirable in AI agents. We do this through a systematic enquiry broken down into three levels of interrogation.
Firstly, do AIs necessarily have to be conscious in order to pose a reasonable and deep-rooted existential threat to human civilisation? This is something I answered indirectly in my previous article with the help of the paperclip maximiser experiment, as explained by Nick Bostrom.
For the purposes of this question, it is enough to note that an artificial general intelligent agent merely has to deviate from the classical goals of value alignment, which emphasise the survival of human individuals, to pose an existential threat to us. Therefore, while it is unclear whether consciousness can objectively lead to an existential threat, it is certainly not the only mechanism through which such a threat can arise.
Secondly, is human-level consciousness integral to developing AI agents that pose no threat to our existence? Even if we presume this to be the case, the concern is directly evaluated in dealing with the value-alignment problem. Additionally, even if, after groundbreaking revelations, we succeed in developing a computational print of consciousness, enforcing subject-specific regulations will open avenues for bias in the process. While dealing with artificial narrow agents, it must be kept in mind that the agent's only objective is to achieve its goal; whether it is conscious or not is irrelevant.
Lastly, is it valid to give any weight to the qualified distinction that consciousness offers human beings? Despite its philosophical leaning, this is a very important question, and a concise answer can be gleaned from a mere glance at the state of things.
The eminent AI researcher Eliezer Yudkowsky, through the AI-box experiment, showed that an AI can pose a lethal threat through minimal communication access even while confined to a box or virtual prison. The core subject matter of the experiment was the uncertainty associated with artificial intelligence.
While formulating a response to this situation, we must reflect upon our prevalent biases and weaknesses, as the engagement will be vital for our continued survival. AIs therefore present another opportunity for a coherent re-evaluation of consciousness, which is a far greater concern than the hypothetical remark that consciousness gives us an upper hand while competing with AIs.
Sujoy Sarkar is a third-year student pursuing B.A. LL.B from National Law School of India University, Bangalore. He aspires to become a Computer Programmer and experimental music producer. He's an avid supporter of cryptocurrencies and the objectives of singularitarianism.