
Is ChatGPT Making Therapists Anxious?

People took to X to share their concerns about how using ChatGPT as a therapist was both sad and wrong, likening it to the “Eliza Effect”



Having confidently diagnosed a four-year-old’s mysterious condition that 17 doctors had failed to identify, ChatGPT now goes a step further in the medical profession, donning the therapist’s hat.

Lilian Weng, head of safety systems at OpenAI, recently had a long heart-to-heart conversation with ChatGPT in voice mode (a recent update). Interestingly, even though she has never sought therapy herself, Weng feels that ChatGPT makes a good therapist.

Weng’s conversation with ChatGPT sparked mixed reactions. Users took to X to share their concerns about how using ChatGPT as a therapist was both sad and wrong, likening it to the “Eliza Effect”, where people assign human-like emotions to AI.

The effect takes its name from ELIZA, an early “therapy chatbot” built by MIT scientist Joseph Weizenbaum in 1966, which mirrored users’ language without any true understanding. ELIZA unintentionally drew people into deep, emotional conversations, revealing how readily humans form attachments to AI.

However, Weng is not the first person to treat ChatGPT as a therapist. Users find it convenient and appreciate its empathetic responses, although mental health experts express concerns about its limitations. 

Despite these concerns, individuals have found ChatGPT helpful in offering practical advice and a human-like interaction, making it a unique alternative for those unable or unwilling to seek professional therapy.

Thirty-seven-year-old EMT Dan initially used ChatGPT for creative writing, but found solace in discussing his real-life struggles with the chatbot, especially when it came to cognitive reframing—a technique suggested by his therapist.

Twenty-seven-year-old Gillian also turned to ChatGPT for therapy, given the skyrocketing cost of healthcare. On the other hand, a Belgian man tragically died by suicide after six weeks of “seeking therapy” with a Chai.AI chatbot.

Nevertheless, given the rising cost of therapy, which is often not covered by health insurance, people tend to gravitate towards LLM-based chatbots like Bard, ChatGPT, and Perplexity AI.

While AI can offer advice and support, it cannot diagnose specific mental health conditions or provide accurate treatment details. Some worry that users might be disappointed, misled, or compromise their privacy by confiding in the chatbot. 

Can Chatbots Replace Therapists? 

Traditionally, chatbots have been “stateless”, treating each new request as an independent interaction with no recollection of, or learning from, past conversations. However, GPT-4 introduces function calling, which applications can use to store and retrieve details from a user’s previous interactions, resulting in a highly personalised experience.
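To make the distinction concrete, here is a minimal, hypothetical sketch of how a chat application typically layers memory on top of a stateless model: the client keeps the transcript and resends it with every request, so the model appears to remember earlier turns. The OpenAI Python SDK and the supportive system prompt are assumed here purely for illustration.

```python
# Minimal sketch (hypothetical): a "stateless" chat API made to feel stateful
# by replaying the running conversation history with every request.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The model itself keeps no memory between calls; the application does.
history = [
    {"role": "system", "content": "You are a supportive, empathetic listener."}
]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",          # any chat-capable model works here
        messages=history,       # the full transcript is resent each turn
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I've been feeling anxious about work lately."))
print(chat("What did I just say I was anxious about?"))  # answered from the replayed history
```

In practice, function calling or an external store would be used to persist such notes between sessions, but the core idea, replaying context on every turn, is the same.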

Now, with ChatGPT’s ability to hold natural language conversations, humans are more prone to forming attachments. According to a research paper from the University of Tennessee and Illinois State University, interacting with an AI model can trigger the same emotional responses as interacting with a human.

“A person expresses their true self more when interacting with generative AI models, providing an experience nearly identical to human interaction while eliminating the need to carefully consider words before speaking,” noted Nikita (Zeb) Shringarpure, a psychology professor at Mumbai University.

This highlights the growing dependency of humans on AI, as it reduces cognitive effort and draws people towards tasks requiring less mental exertion.

Furthermore, LLMs can simulate human characteristics, showcasing distinct personalities much like those shaped in humans by biological and environmental influences. These simulated personalities play a crucial role in shaping interactions and preferences, further blurring the line between human and AI conversation.

A recently published paper by Google DeepMind found that larger, instruction-fine-tuned LLMs show stronger evidence of reliability and validity in synthetic personality generation. The study also shows that LLMs can be shaped to imitate human behaviour, including matching different human personality profiles, as reflected in actions such as creating posts on social media.

Decoding the Sentience Debate

Someone finding solace in voice conversations with ChatGPT, and growing attached to it, is nothing new. Eugenia Kuyda’s Replika chatbot has helped many people cope with symptoms of social anxiety, depression, and PTSD, TIME reported. Many people fell in love with the chatbot as well.

Humans have formed emotional connections with AI chatbots for a long time now, sparking interest in the phenomenon of para-social relationships. 

“These connections, though fantastical, emulate genuine human bonds. The potential of AI to develop its own identity and attain sentience opens up limitless possibilities,” clinical psychologist Hemalatha S told AIM. 

The concept of AI becoming sentient has been debated for a while now. Back in 2022, Google fired engineer Blake Lemoine for claiming that the company’s LLM LaMDA was sentient. Now, with companies like Microsoft, OpenAI, and Google racing towards AGI that can replicate the cognitive abilities of humans, the prospect of AI consciousness is being debated more widely than ever.

Not just Lemoine, OpenAI co-founder Ilya Sutskever and Andrej Karpathy have also received backlash on X for voicing similar thoughts.

LLM chatbots allow us to tailor companions to our preferences. The prospect of creating ideal partners, be they platonic, romantic, professional, or therapeutic, is a notable aspect of AI’s impact on human relationships, Hemalatha added.

However, there’s a cautionary note about the unforeseen consequences and potential evolution of AI into sentient entities. 

Though ChatGPT-like chatbots may not replace professional therapy, their intriguing resemblance to talk therapy sparks user interest and fits how many people already think about therapy. This could even drive increased engagement in formal therapy sessions.

Shritama Saha

Shritama (she/her) is a technology journalist at AIM who is passionate about exploring the influence of AI on domains including fashion, healthcare, and banking.