
The AI Awakening 

In the absence of grasping consciousness itself, can we truly comprehend conscious AI?


‘Can machines think?’ This is the question the English mathematician and computer scientist Alan Turing posed in his 1950 paper ‘Computing Machinery and Intelligence’. In response, Turing proposed the imitation game, later known as the ‘Turing test’, to determine whether a computer can exhibit intelligent behaviour indistinguishable from that of a human. But that was more than 70 years ago. Today, one of the most intriguing questions in AI is whether recent advancements could eventually lead to consciousness in AI.

The topic has garnered significant attention since former Google engineer Blake Lemoine claimed that LaMDA, the chatbot he was testing, was sentient. With companies like OpenAI chasing AGI, a theoretical form of artificial intelligence that aims to replicate the general cognitive abilities of humans, the prospect of conscious AI is being widely debated. Even though AGI doesn’t inherently require consciousness, the question ‘How will we know if a machine becomes conscious?’ is now also being asked.

A checklist to test consciousness in machines

Given the significant advancements in the field, many researchers and members of the AI community feel that the ‘Turing test’ is no longer relevant. The test was designed to gauge intelligence in machines, not consciousness. To test consciousness in AI, a group of 19 computer scientists, neuroscientists, and philosophers has recently proposed an approach involving an extensive checklist of attributes. While not a conclusive test for consciousness, the checklist contains 14 ‘indicator properties’ that a conscious AI model would be likely to display.

“We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive ‘indicator properties’ of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties,” the researchers state in a paper titled ‘Consciousness in Artificial Intelligence: Insights from the Science of Consciousness’.

The researchers assessed current AI models like Google’s PaLM-E and DeepMind’s Adaptive Agent but found no significant evidence that any of them were conscious. Given the mastery LLMs like the GPT models have over the English language, they might appear sentient; however, that is not the case.

It’s important to note that even though the paper has attracted considerable attention, it is a preprint and has not yet been peer-reviewed. Moreover, the theories mentioned above were designed to study consciousness in humans, and the question remains whether they can be applied to AI.

While the researchers themselves describe their research as a work in progress, some of them are developing a broader consciousness test that could also be applied to organoids, animals, and newborns. “While the indicators themselves are subject to change as theories of consciousness evolve, we hope that this approach will help make the discussion of AI consciousness more objective,” said Eric Elmoznino, one of the authors of the paper.

Adds to the heightened hype

Lemoine lost his job at Google for claiming LaMDA was sentient, but he is not the only researcher to make such claims. Earlier, Ilya Sutskever, co-founder of OpenAI, claimed that large neural networks may already be slightly conscious. Like Lemoine, Sutskever received heavy backlash, though he kept his job. Interestingly, the researchers behind the paper mentioned above state that if computational functionalism is true, the evidence suggests conscious AI systems could realistically be built in the near term.

Additionally, many are of the opinion that embodiment, having senses and the ability to act in the physical world, is a prerequisite for consciousness. But there are contrasting views as well. “I have virtually no doubt that AI will eventually become consciousness for I do not think consciousness requires a specific physical basis (such as a brain) or anything besides suitable information processing capacity,” German physicist Sabine Hossenfelder said. As yet, there is no concrete research to justify either claim.

On the flip side, some argue that discussions like this one add to the already heightened hype around AI. Toby Walsh, an AI researcher at UNSW Sydney, believes that when speculative debates take centre stage, it takes months of concerted effort to refocus attention on the practical opportunities and challenges presented by AI.

Such hype can overshadow the incremental advancements and practical implementations of AI technologies that are already making a positive impact across industries, and focusing too much on it can divert attention and resources from real-world AI applications that could bring tangible benefits.

Moreover, remarks like those made by Sutskever and Lemoine feed the recent trend of fearmongering around AI. Reams have already been written about the potential risks of AI achieving human-level intelligence or becoming conscious, with some even suggesting it could lead to human extinction. Giada Pistilli, principal ethicist at Hugging Face, had previously told AIM that what’s imperative is responsible reporting and a contextual understanding of AI capabilities and benefits, without solely nourishing the fear narrative.

We still don’t understand consciousness 

Consciousness is a complex and elusive phenomenon that has been the subject of philosophical inquiry, scientific research, and debate for centuries. Even today, human understanding of the very concept remains limited, from both a philosophical and a scientific point of view.

Considering humanity’s limited understanding of consciousness, the endeavour to comprehend and create conscious AI remains a profound challenge. A counter-argument, however, is that pursuing such goals can yield valuable insights into both AI development and our understanding of consciousness itself. For now, the quest to understand consciousness, both in itself and in AI, rages on.

Here’s an older video from AIM asking the same question – Is AI Conscious?
https://www.youtube.com/watch?v=eFCsolNQyFU

Pritam Bordoloi

I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.