Having confidently diagnosed a four-year-old’s mysterious disease that 17 doctors failed to identify, ChatGPT now goes a step further in the medical profession, donning the therapist’s hat.
Lilian Weng, head of safety systems at OpenAI, recently had a long heart-to-heart conversation with ChatGPT in voice mode (a recent update). Interestingly, despite never having sought therapy herself, Weng feels that ChatGPT is a good therapist.
Weng’s conversation with ChatGPT sparked mixed reactions. Users took to X to argue that using ChatGPT as a therapist was both sad and wrong, likening it to the “Eliza Effect”, where people assign human-like emotions to AI.
The effect takes its name from Eliza, an early “therapy chatbot” built by MIT scientist Joseph Weizenbaum in 1966, which highlighted the tendency of chatbots to mirror users’ language without true understanding. Eliza unintentionally drew people into deep, emotional conversations, revealing the potential for human attachment to AI.
However, Weng is not the first person to treat ChatGPT as a therapist. Users find it convenient and appreciate its empathetic responses, although mental health experts express concerns about its limitations.
Despite these concerns, individuals have found ChatGPT helpful in offering practical advice and a human-like interaction, making it a unique alternative for those unable or unwilling to seek professional therapy.
Thirty-seven-year-old EMT Dan initially used ChatGPT for creative writing, but found solace in discussing his real-life struggles with the chatbot, especially when it came to cognitive reframing—a technique suggested by his therapist.
Twenty-seven-year-old Gillian also turned to ChatGPT for therapy, given the skyrocketing cost of healthcare. On the other hand, a Belgian man tragically died by suicide after six weeks of “seeking therapy” with a Chai.AI chatbot.
With the rising cost of therapy, which is often not covered by health insurance, people tend to gravitate towards LLM-based chatbots like Bard, ChatGPT, and Perplexity AI.
While AI can offer advice and support, it cannot diagnose specific mental health conditions or provide accurate treatment details. Some worry that users might be disappointed, misled, or compromise their privacy by confiding in the chatbot.
Can Chatbots Replace Therapists?
Traditionally, chatbots have been “stateless”, treating each new request as an independent interaction with no recollection of, or learning from, past conversations. ChatGPT-style interfaces, however, maintain context within a session by resending the accumulated conversation history with every request, resulting in a far more personalised experience.
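A minimal sketch of the difference: the underlying model is still stateless, but the client keeps a running message list and sends it back with each new turn, so earlier statements remain available as context. The model call below is a stub standing in for a real LLM API request; the class and function names are illustrative, not from any particular library.

```python
# Sketch of a "stateful" chat built on a stateless model: the client
# accumulates the message history and resends all of it each turn.

def fake_model_reply(messages):
    # Stand-in for an LLM API call: a real call would send `messages`
    # to the model; here we just report how much context was available.
    return f"(reply informed by {len(messages)} prior messages)"

class Conversation:
    def __init__(self, system_prompt):
        # The history starts with a system message and grows each turn.
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = fake_model_reply(self.messages)  # full history goes out
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation("You are a supportive listener.")
chat.send("I've been feeling stressed at work.")
chat.send("What did I just tell you?")  # earlier turn is still in context
```

Because the whole history travels with each request, the model can refer back to anything said earlier in the session, which is what makes the interaction feel personalised.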
Now, with ChatGPT’s ability to engage in natural language conversations, humans are more prone to forming attachments. According to a research paper by the University of Tennessee and Illinois State, interaction with an AI model can trigger the same emotional responses as interacting with a human.
“A person expresses their true self more when interacting with generative AI models, providing an experience nearly identical to human interaction while eliminating the need to carefully consider words before speaking,” noted Nikita (Zeb) Shringarpure, a psychology professor at Mumbai University.
This highlights the growing dependency of humans on AI, as it reduces cognitive effort and draws people towards tasks requiring less mental exertion.
Furthermore, LLMs can simulate human characteristics, displaying distinct personalities that mimic those shaped in humans by biological and environmental influences. These personalities play a crucial role in shaping interactions and preferences, blurring the line between human and artificial intelligence.
In a recently published paper by Google DeepMind, larger and instruction-fine-tuned LLMs show stronger evidence of reliability and validity in synthetic personality generation. The study also reveals the possibility of shaping LLMs to imitate human behaviour, including matching different human personalities, as seen in their actions, such as creating posts on social media.
Decoding the Sentience Debate
Someone finding solace in sharing their stories with ChatGPT in voice mode, and growing attached to it, is nothing new. Eugenia Kuyda’s Replika chatbot helped many people cope with symptoms of social anxiety, depression, and PTSD, TIME reported. Many people fell in love with the chatbot as well.
Humans have formed emotional connections with AI chatbots for a long time now, sparking interest in the phenomenon of para-social relationships.
“These connections, though fantastical, emulate genuine human bonds. The potential of AI to develop its own identity and attain sentience opens up limitless possibilities,” clinical psychologist Hemalatha S told AIM.
The concept of AI becoming sentient has been debated for a while now. Last June, Google engineer Blake Lemoine claimed the company’s LLM LaMDA was sentient, and was subsequently fired. Now, with companies like Microsoft, OpenAI, and Google racing towards AGI that can replicate the cognitive abilities of humans, the prospect of AI consciousness is being widely debated.
Not just Lemoine; OpenAI cofounder Ilya Sutskever and Andrej Karpathy also received backlash on X for voicing a similar thought.
LLM chatbots allow us to tailor companions to our preferences. The prospect of creating ideal partners, be they platonic, romantic, professional, or therapeutic, is a notable aspect of AI’s impact on human relationships, Hemalatha added.
However, there’s a cautionary note about the unforeseen consequences and potential evolution of AI into sentient entities.
Though ChatGPT-like chatbots may not replace professional therapy, their intriguing resemblance to talk therapy sparks user interest and aligns with how many individuals conceptualise therapy itself. This could even drive increased engagement with formal therapy sessions.